00:00:00.001 Started by upstream project "autotest-per-patch" build number 132792 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.026 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.027 The recommended git tool is: git 00:00:00.027 using credential 00000000-0000-0000-0000-000000000002 00:00:00.028 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.044 Fetching changes from the remote Git repository 00:00:00.045 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.066 Using shallow fetch with depth 1 00:00:00.066 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.066 > git --version # timeout=10 00:00:00.092 > git --version # 'git version 2.39.2' 00:00:00.092 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.111 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.111 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.652 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.664 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.675 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.675 > git config core.sparsecheckout # timeout=10 00:00:02.686 > git read-tree -mu HEAD # timeout=10 00:00:02.701 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.723 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.724 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.842 [Pipeline] Start of Pipeline 00:00:02.855 [Pipeline] library 00:00:02.857 Loading library shm_lib@master 00:00:02.857 Library shm_lib@master is cached. Copying from home. 00:00:02.880 [Pipeline] node 00:00:02.887 Running on VM-host-WFP7 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:02.892 [Pipeline] { 00:00:02.921 [Pipeline] catchError 00:00:02.923 [Pipeline] { 00:00:02.937 [Pipeline] wrap 00:00:02.945 [Pipeline] { 00:00:02.950 [Pipeline] stage 00:00:02.951 [Pipeline] { (Prologue) 00:00:02.964 [Pipeline] echo 00:00:02.966 Node: VM-host-WFP7 00:00:02.972 [Pipeline] cleanWs 00:00:02.982 [WS-CLEANUP] Deleting project workspace... 00:00:02.982 [WS-CLEANUP] Deferred wipeout is used... 
00:00:02.990 [WS-CLEANUP] done 00:00:03.210 [Pipeline] setCustomBuildProperty 00:00:03.319 [Pipeline] httpRequest 00:00:04.314 [Pipeline] echo 00:00:04.316 Sorcerer 10.211.164.112 is alive 00:00:04.326 [Pipeline] retry 00:00:04.328 [Pipeline] { 00:00:04.342 [Pipeline] httpRequest 00:00:04.347 HttpMethod: GET 00:00:04.348 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.349 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.350 Response Code: HTTP/1.1 200 OK 00:00:04.350 Success: Status code 200 is in the accepted range: 200,404 00:00:04.351 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.507 [Pipeline] } 00:00:04.519 [Pipeline] // retry 00:00:04.527 [Pipeline] sh 00:00:04.812 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.826 [Pipeline] httpRequest 00:00:05.245 [Pipeline] echo 00:00:05.246 Sorcerer 10.211.164.112 is alive 00:00:05.256 [Pipeline] retry 00:00:05.258 [Pipeline] { 00:00:05.271 [Pipeline] httpRequest 00:00:05.284 HttpMethod: GET 00:00:05.285 URL: http://10.211.164.112/packages/spdk_25cdf096c11aeb80bb79dadada3f8676a9e00f0e.tar.gz 00:00:05.286 Sending request to url: http://10.211.164.112/packages/spdk_25cdf096c11aeb80bb79dadada3f8676a9e00f0e.tar.gz 00:00:05.292 Response Code: HTTP/1.1 200 OK 00:00:05.292 Success: Status code 200 is in the accepted range: 200,404 00:00:05.293 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_25cdf096c11aeb80bb79dadada3f8676a9e00f0e.tar.gz 00:00:27.033 [Pipeline] } 00:00:27.051 [Pipeline] // retry 00:00:27.059 [Pipeline] sh 00:00:27.343 + tar --no-same-owner -xf spdk_25cdf096c11aeb80bb79dadada3f8676a9e00f0e.tar.gz 00:00:30.646 [Pipeline] sh 00:00:30.930 + git -C spdk log --oneline -n5 00:00:30.930 25cdf096c env: use 4-KiB memory mapping in no-huge mode 00:00:30.930 04ba75cf7 env: extend the page table to support 4-KiB mapping 00:00:30.930 b4f857a04 env: add mem_map_fini and vtophys_fini for cleanup 00:00:30.930 3fe025922 env: handle possible DPDK errors in mem_map_init 00:00:30.930 b71c8b8dd env: explicitly set --legacy-mem flag in no hugepages mode 00:00:30.949 [Pipeline] writeFile 00:00:30.965 [Pipeline] sh 00:00:31.251 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:31.263 [Pipeline] sh 00:00:31.547 + cat autorun-spdk.conf 00:00:31.547 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.547 SPDK_TEST_NVMF=1 00:00:31.547 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:31.547 SPDK_TEST_URING=1 00:00:31.547 SPDK_TEST_USDT=1 00:00:31.547 SPDK_RUN_UBSAN=1 00:00:31.547 NET_TYPE=virt 00:00:31.547 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:31.554 RUN_NIGHTLY=0 00:00:31.556 [Pipeline] } 00:00:31.570 [Pipeline] // stage 00:00:31.586 [Pipeline] stage 00:00:31.588 [Pipeline] { (Run VM) 00:00:31.601 [Pipeline] sh 00:00:31.884 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:31.884 + echo 'Start stage prepare_nvme.sh' 00:00:31.884 Start stage prepare_nvme.sh 00:00:31.884 + [[ -n 1 ]] 00:00:31.884 + disk_prefix=ex1 00:00:31.884 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:31.884 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:31.884 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:31.884 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:31.884 ++ SPDK_TEST_NVMF=1 00:00:31.884 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:00:31.884 ++ SPDK_TEST_URING=1 00:00:31.884 ++ SPDK_TEST_USDT=1 00:00:31.884 ++ SPDK_RUN_UBSAN=1 00:00:31.884 ++ NET_TYPE=virt 00:00:31.884 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:31.884 ++ RUN_NIGHTLY=0 00:00:31.884 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:31.884 + nvme_files=() 00:00:31.884 + declare -A nvme_files 00:00:31.884 + backend_dir=/var/lib/libvirt/images/backends 00:00:31.884 + nvme_files['nvme.img']=5G 00:00:31.884 + nvme_files['nvme-cmb.img']=5G 00:00:31.884 + nvme_files['nvme-multi0.img']=4G 00:00:31.884 + nvme_files['nvme-multi1.img']=4G 00:00:31.884 + nvme_files['nvme-multi2.img']=4G 00:00:31.884 + nvme_files['nvme-openstack.img']=8G 00:00:31.884 + nvme_files['nvme-zns.img']=5G 00:00:31.884 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:31.884 + (( SPDK_TEST_FTL == 1 )) 00:00:31.884 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:31.884 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:31.884 + for nvme in "${!nvme_files[@]}" 00:00:31.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:31.884 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.884 + for nvme in "${!nvme_files[@]}" 00:00:31.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:31.884 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.884 + for nvme in "${!nvme_files[@]}" 00:00:31.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:31.884 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:31.884 + for nvme in "${!nvme_files[@]}" 00:00:31.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:31.884 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:31.884 + for nvme in "${!nvme_files[@]}" 00:00:31.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:31.884 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.884 + for nvme in "${!nvme_files[@]}" 00:00:31.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:31.884 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:31.884 + for nvme in "${!nvme_files[@]}" 00:00:31.884 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:32.825 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.825 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:32.825 + echo 'End stage prepare_nvme.sh' 00:00:32.825 End stage prepare_nvme.sh 00:00:32.835 [Pipeline] sh 00:00:33.116 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:33.116 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b 
/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:00:33.116 00:00:33.116 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:33.116 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:33.116 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:33.116 HELP=0 00:00:33.116 DRY_RUN=0 00:00:33.116 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:00:33.116 NVME_DISKS_TYPE=nvme,nvme, 00:00:33.116 NVME_AUTO_CREATE=0 00:00:33.116 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:00:33.116 NVME_CMB=,, 00:00:33.116 NVME_PMR=,, 00:00:33.116 NVME_ZNS=,, 00:00:33.116 NVME_MS=,, 00:00:33.116 NVME_FDP=,, 00:00:33.116 SPDK_VAGRANT_DISTRO=fedora39 00:00:33.116 SPDK_VAGRANT_VMCPU=10 00:00:33.116 SPDK_VAGRANT_VMRAM=12288 00:00:33.116 SPDK_VAGRANT_PROVIDER=libvirt 00:00:33.116 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:33.116 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:33.116 SPDK_OPENSTACK_NETWORK=0 00:00:33.116 VAGRANT_PACKAGE_BOX=0 00:00:33.116 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:33.116 FORCE_DISTRO=true 00:00:33.116 VAGRANT_BOX_VERSION= 00:00:33.116 EXTRA_VAGRANTFILES= 00:00:33.116 NIC_MODEL=virtio 00:00:33.116 00:00:33.116 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:33.117 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:35.649 Bringing machine 'default' up with 'libvirt' provider... 00:00:36.216 ==> default: Creating image (snapshot of base box volume). 00:00:36.475 ==> default: Creating domain with the following settings... 
00:00:36.475 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733741009_35f57f52d050f7969ad2 00:00:36.475 ==> default: -- Domain type: kvm 00:00:36.475 ==> default: -- Cpus: 10 00:00:36.475 ==> default: -- Feature: acpi 00:00:36.475 ==> default: -- Feature: apic 00:00:36.475 ==> default: -- Feature: pae 00:00:36.475 ==> default: -- Memory: 12288M 00:00:36.475 ==> default: -- Memory Backing: hugepages: 00:00:36.475 ==> default: -- Management MAC: 00:00:36.475 ==> default: -- Loader: 00:00:36.475 ==> default: -- Nvram: 00:00:36.475 ==> default: -- Base box: spdk/fedora39 00:00:36.475 ==> default: -- Storage pool: default 00:00:36.475 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733741009_35f57f52d050f7969ad2.img (20G) 00:00:36.475 ==> default: -- Volume Cache: default 00:00:36.475 ==> default: -- Kernel: 00:00:36.475 ==> default: -- Initrd: 00:00:36.475 ==> default: -- Graphics Type: vnc 00:00:36.475 ==> default: -- Graphics Port: -1 00:00:36.475 ==> default: -- Graphics IP: 127.0.0.1 00:00:36.475 ==> default: -- Graphics Password: Not defined 00:00:36.475 ==> default: -- Video Type: cirrus 00:00:36.475 ==> default: -- Video VRAM: 9216 00:00:36.475 ==> default: -- Sound Type: 00:00:36.475 ==> default: -- Keymap: en-us 00:00:36.475 ==> default: -- TPM Path: 00:00:36.475 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:36.475 ==> default: -- Command line args: 00:00:36.475 ==> default: -> value=-device, 00:00:36.475 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:36.475 ==> default: -> value=-drive, 00:00:36.475 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:00:36.475 ==> default: -> value=-device, 00:00:36.475 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.475 ==> default: -> value=-device, 00:00:36.475 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:36.475 ==> default: -> value=-drive, 00:00:36.475 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:36.475 ==> default: -> value=-device, 00:00:36.475 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.475 ==> default: -> value=-drive, 00:00:36.475 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:36.475 ==> default: -> value=-device, 00:00:36.475 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.475 ==> default: -> value=-drive, 00:00:36.475 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:36.475 ==> default: -> value=-device, 00:00:36.475 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:36.475 ==> default: Creating shared folders metadata... 00:00:36.475 ==> default: Starting domain. 00:00:37.855 ==> default: Waiting for domain to get an IP address... 00:00:55.964 ==> default: Waiting for SSH to become available... 00:00:55.964 ==> default: Configuring and enabling network interfaces... 
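For reference, the -device/-drive pairs listed in the domain definition above map directly onto a plain QEMU command line, so the same emulated NVMe disks can be exercised without vagrant-libvirt. The lines below are a minimal, hypothetical sketch of the first controller only (serial 12340, one 4 KiB-block namespace); the memory size, backing-file path and missing guest disk are placeholders, and it assumes a QEMU recent enough to provide the separate nvme-ns device (the pinned vanilla v8.0.0 used by this job does). The second controller (serial 12341) follows the same pattern with three nvme-ns entries, nsid 1 through 3, backed by the ex1-nvme-multi0/1/2 images.

  # Backing file, matching the raw/falloc formatting step earlier in this log.
  qemu-img create -f raw -o preallocation=falloc ex1-nvme.img 5G

  # Stand-alone equivalent of the nvme-0 controller definition above.
  # A bootable guest disk would normally be added as well (omitted here).
  qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive format=raw,file=ex1-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096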
00:01:01.232 default: SSH address: 192.168.121.27:22 00:01:01.232 default: SSH username: vagrant 00:01:01.232 default: SSH auth method: private key 00:01:03.130 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:11.257 ==> default: Mounting SSHFS shared folder... 00:01:13.788 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:13.788 ==> default: Checking Mount.. 00:01:15.168 ==> default: Folder Successfully Mounted! 00:01:15.168 ==> default: Running provisioner: file... 00:01:16.163 default: ~/.gitconfig => .gitconfig 00:01:16.438 00:01:16.438 SUCCESS! 00:01:16.438 00:01:16.438 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:16.438 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:16.438 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:16.438 00:01:16.447 [Pipeline] } 00:01:16.458 [Pipeline] // stage 00:01:16.465 [Pipeline] dir 00:01:16.465 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:16.466 [Pipeline] { 00:01:16.477 [Pipeline] catchError 00:01:16.479 [Pipeline] { 00:01:16.490 [Pipeline] sh 00:01:16.770 + vagrant ssh-config --host vagrant 00:01:16.770 + sed -ne /^Host/,$p 00:01:16.770 + tee ssh_conf 00:01:19.308 Host vagrant 00:01:19.308 HostName 192.168.121.27 00:01:19.308 User vagrant 00:01:19.308 Port 22 00:01:19.308 UserKnownHostsFile /dev/null 00:01:19.308 StrictHostKeyChecking no 00:01:19.308 PasswordAuthentication no 00:01:19.308 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:19.308 IdentitiesOnly yes 00:01:19.308 LogLevel FATAL 00:01:19.308 ForwardAgent yes 00:01:19.308 ForwardX11 yes 00:01:19.308 00:01:19.321 [Pipeline] withEnv 00:01:19.324 [Pipeline] { 00:01:19.337 [Pipeline] sh 00:01:19.620 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:19.620 source /etc/os-release 00:01:19.620 [[ -e /image.version ]] && img=$(< /image.version) 00:01:19.620 # Minimal, systemd-like check. 00:01:19.620 if [[ -e /.dockerenv ]]; then 00:01:19.620 # Clear garbage from the node's name: 00:01:19.620 # agt-er_autotest_547-896 -> autotest_547-896 00:01:19.620 # $HOSTNAME is the actual container id 00:01:19.620 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:19.620 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:19.620 # We can assume this is a mount from a host where container is running, 00:01:19.620 # so fetch its hostname to easily identify the target swarm worker. 
00:01:19.620 container="$(< /etc/hostname) ($agent)" 00:01:19.620 else 00:01:19.620 # Fallback 00:01:19.620 container=$agent 00:01:19.620 fi 00:01:19.620 fi 00:01:19.620 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:19.620 00:01:19.960 [Pipeline] } 00:01:19.975 [Pipeline] // withEnv 00:01:19.983 [Pipeline] setCustomBuildProperty 00:01:19.997 [Pipeline] stage 00:01:19.998 [Pipeline] { (Tests) 00:01:20.014 [Pipeline] sh 00:01:20.294 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:20.564 [Pipeline] sh 00:01:20.841 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:21.113 [Pipeline] timeout 00:01:21.114 Timeout set to expire in 1 hr 0 min 00:01:21.116 [Pipeline] { 00:01:21.130 [Pipeline] sh 00:01:21.408 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:21.974 HEAD is now at 25cdf096c env: use 4-KiB memory mapping in no-huge mode 00:01:21.985 [Pipeline] sh 00:01:22.266 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:22.537 [Pipeline] sh 00:01:22.821 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:23.094 [Pipeline] sh 00:01:23.378 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:23.638 ++ readlink -f spdk_repo 00:01:23.638 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:23.638 + [[ -n /home/vagrant/spdk_repo ]] 00:01:23.638 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:23.638 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:23.638 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:23.638 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:23.638 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:23.638 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:23.638 + cd /home/vagrant/spdk_repo 00:01:23.638 + source /etc/os-release 00:01:23.638 ++ NAME='Fedora Linux' 00:01:23.638 ++ VERSION='39 (Cloud Edition)' 00:01:23.638 ++ ID=fedora 00:01:23.638 ++ VERSION_ID=39 00:01:23.638 ++ VERSION_CODENAME= 00:01:23.638 ++ PLATFORM_ID=platform:f39 00:01:23.638 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:23.638 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:23.638 ++ LOGO=fedora-logo-icon 00:01:23.638 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:23.638 ++ HOME_URL=https://fedoraproject.org/ 00:01:23.638 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:23.638 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:23.638 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:23.638 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:23.638 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:23.638 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:23.638 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:23.638 ++ SUPPORT_END=2024-11-12 00:01:23.638 ++ VARIANT='Cloud Edition' 00:01:23.638 ++ VARIANT_ID=cloud 00:01:23.638 + uname -a 00:01:23.638 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:23.638 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:24.205 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:24.205 Hugepages 00:01:24.205 node hugesize free / total 00:01:24.205 node0 1048576kB 0 / 0 00:01:24.205 node0 2048kB 0 / 0 00:01:24.205 00:01:24.206 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:24.206 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:24.206 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:24.206 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:24.206 + rm -f /tmp/spdk-ld-path 00:01:24.206 + source autorun-spdk.conf 00:01:24.206 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.206 ++ SPDK_TEST_NVMF=1 00:01:24.206 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.206 ++ SPDK_TEST_URING=1 00:01:24.206 ++ SPDK_TEST_USDT=1 00:01:24.206 ++ SPDK_RUN_UBSAN=1 00:01:24.206 ++ NET_TYPE=virt 00:01:24.206 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:24.206 ++ RUN_NIGHTLY=0 00:01:24.206 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:24.206 + [[ -n '' ]] 00:01:24.206 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:24.465 + for M in /var/spdk/build-*-manifest.txt 00:01:24.465 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:24.465 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:24.465 + for M in /var/spdk/build-*-manifest.txt 00:01:24.465 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:24.465 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:24.465 + for M in /var/spdk/build-*-manifest.txt 00:01:24.465 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:24.465 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:24.465 ++ uname 00:01:24.465 + [[ Linux == \L\i\n\u\x ]] 00:01:24.465 + sudo dmesg -T 00:01:24.465 + sudo dmesg --clear 00:01:24.465 + dmesg_pid=5433 00:01:24.465 + [[ Fedora Linux == FreeBSD ]] 00:01:24.465 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.465 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.465 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:24.465 + sudo dmesg -Tw 00:01:24.465 + [[ -x /usr/src/fio-static/fio ]] 00:01:24.465 + export FIO_BIN=/usr/src/fio-static/fio 00:01:24.465 + FIO_BIN=/usr/src/fio-static/fio 00:01:24.465 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:24.465 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:24.465 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:24.465 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.465 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.465 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:24.465 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.465 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.465 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:24.465 10:44:17 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:24.465 10:44:17 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:24.465 10:44:17 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.465 10:44:17 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:24.465 10:44:17 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.465 10:44:17 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:24.465 10:44:17 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:01:24.465 10:44:17 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:24.465 10:44:17 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:24.465 10:44:17 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:24.465 10:44:17 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:24.465 10:44:17 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:24.465 10:44:17 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:24.725 10:44:17 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:24.725 10:44:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:24.725 10:44:17 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:24.725 10:44:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:24.725 10:44:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:24.725 10:44:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:24.725 10:44:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.725 10:44:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.725 10:44:17 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.725 10:44:17 -- paths/export.sh@5 -- $ export PATH 00:01:24.725 10:44:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.725 10:44:17 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:24.725 10:44:17 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:24.725 10:44:17 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733741057.XXXXXX 00:01:24.725 10:44:17 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733741057.9Mk4sL 00:01:24.725 10:44:17 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:24.725 10:44:17 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:24.725 10:44:17 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:24.725 10:44:17 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:24.725 10:44:17 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:24.725 10:44:17 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:24.725 10:44:17 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:24.725 10:44:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.725 10:44:17 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:24.725 10:44:17 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:24.725 10:44:17 -- pm/common@17 -- $ local monitor 00:01:24.725 10:44:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.725 10:44:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.725 10:44:17 -- pm/common@25 -- $ sleep 1 00:01:24.725 10:44:17 -- pm/common@21 -- $ date +%s 00:01:24.725 10:44:17 -- pm/common@21 -- $ date +%s 00:01:24.725 10:44:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733741057 00:01:24.725 10:44:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733741057 00:01:24.725 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733741057_collect-cpu-load.pm.log 00:01:24.725 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733741057_collect-vmstat.pm.log 00:01:25.664 10:44:18 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:25.664 10:44:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:25.664 10:44:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:25.664 10:44:18 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:25.664 10:44:18 -- spdk/autobuild.sh@16 -- $ date -u 00:01:25.664 Mon Dec 9 10:44:18 AM UTC 2024 00:01:25.664 10:44:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:25.664 v25.01-pre-317-g25cdf096c 00:01:25.664 10:44:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:25.664 10:44:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:25.664 10:44:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:25.664 10:44:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:25.664 10:44:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:25.664 10:44:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.664 ************************************ 00:01:25.664 START TEST ubsan 00:01:25.664 ************************************ 00:01:25.664 using ubsan 00:01:25.664 10:44:18 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:25.664 00:01:25.664 real 0m0.000s 00:01:25.664 user 0m0.000s 00:01:25.664 sys 0m0.000s 00:01:25.664 10:44:18 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:25.664 10:44:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:25.664 ************************************ 00:01:25.664 END TEST ubsan 00:01:25.664 ************************************ 00:01:25.664 10:44:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:25.664 10:44:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:25.664 10:44:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:25.664 10:44:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:25.664 10:44:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:25.665 10:44:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:25.665 10:44:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:25.665 10:44:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:25.665 10:44:18 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:25.924 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:25.924 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:26.493 Using 'verbs' RDMA provider 00:01:42.314 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:57.195 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:57.195 Creating mk/config.mk...done. 00:01:57.195 Creating mk/cc.flags.mk...done. 00:01:57.195 Type 'make' to build. 
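Before the make stage starts below: the configure invocation recorded by autobuild.sh a few steps up can be replayed against a plain SPDK checkout to reproduce this build outside the CI VM. Treat the following as a hedged sketch rather than the autobuild script itself; it assumes git submodules are initialized and fio sources are present at /usr/src/fio (required by --with-fio), and flags such as --with-uring or --with-ublk only configure cleanly if liburing and matching kernel support are installed.

  # Sketch of replaying the configure/make steps from this log on a local machine.
  git clone https://github.com/spdk/spdk.git && cd spdk
  git submodule update --init
  # Same feature flags the autobuild step passed to ./configure above.
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
  make -j"$(nproc)"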
00:01:57.195 10:44:50 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:57.195 10:44:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:57.195 10:44:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:57.195 10:44:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.195 ************************************ 00:01:57.195 START TEST make 00:01:57.195 ************************************ 00:01:57.195 10:44:50 make -- common/autotest_common.sh@1129 -- $ make -j10 00:01:57.453 make[1]: Nothing to be done for 'all'. 00:02:09.657 The Meson build system 00:02:09.657 Version: 1.5.0 00:02:09.657 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:09.657 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:09.657 Build type: native build 00:02:09.657 Program cat found: YES (/usr/bin/cat) 00:02:09.657 Project name: DPDK 00:02:09.657 Project version: 24.03.0 00:02:09.657 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:09.657 C linker for the host machine: cc ld.bfd 2.40-14 00:02:09.657 Host machine cpu family: x86_64 00:02:09.657 Host machine cpu: x86_64 00:02:09.657 Message: ## Building in Developer Mode ## 00:02:09.657 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:09.657 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:09.657 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:09.657 Program python3 found: YES (/usr/bin/python3) 00:02:09.657 Program cat found: YES (/usr/bin/cat) 00:02:09.657 Compiler for C supports arguments -march=native: YES 00:02:09.657 Checking for size of "void *" : 8 00:02:09.657 Checking for size of "void *" : 8 (cached) 00:02:09.657 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:09.657 Library m found: YES 00:02:09.657 Library numa found: YES 00:02:09.657 Has header "numaif.h" : YES 00:02:09.657 Library fdt found: NO 00:02:09.657 Library execinfo found: NO 00:02:09.657 Has header "execinfo.h" : YES 00:02:09.657 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:09.657 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:09.657 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:09.657 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:09.657 Run-time dependency openssl found: YES 3.1.1 00:02:09.658 Run-time dependency libpcap found: YES 1.10.4 00:02:09.658 Has header "pcap.h" with dependency libpcap: YES 00:02:09.658 Compiler for C supports arguments -Wcast-qual: YES 00:02:09.658 Compiler for C supports arguments -Wdeprecated: YES 00:02:09.658 Compiler for C supports arguments -Wformat: YES 00:02:09.658 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:09.658 Compiler for C supports arguments -Wformat-security: NO 00:02:09.658 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:09.658 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:09.658 Compiler for C supports arguments -Wnested-externs: YES 00:02:09.658 Compiler for C supports arguments -Wold-style-definition: YES 00:02:09.658 Compiler for C supports arguments -Wpointer-arith: YES 00:02:09.658 Compiler for C supports arguments -Wsign-compare: YES 00:02:09.658 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:09.658 Compiler for C supports arguments -Wundef: YES 00:02:09.658 Compiler for C supports arguments -Wwrite-strings: YES 00:02:09.658 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:09.658 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:09.658 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:09.658 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:09.658 Program objdump found: YES (/usr/bin/objdump) 00:02:09.658 Compiler for C supports arguments -mavx512f: YES 00:02:09.658 Checking if "AVX512 checking" compiles: YES 00:02:09.658 Fetching value of define "__SSE4_2__" : 1 00:02:09.658 Fetching value of define "__AES__" : 1 00:02:09.658 Fetching value of define "__AVX__" : 1 00:02:09.658 Fetching value of define "__AVX2__" : 1 00:02:09.658 Fetching value of define "__AVX512BW__" : 1 00:02:09.658 Fetching value of define "__AVX512CD__" : 1 00:02:09.658 Fetching value of define "__AVX512DQ__" : 1 00:02:09.658 Fetching value of define "__AVX512F__" : 1 00:02:09.658 Fetching value of define "__AVX512VL__" : 1 00:02:09.658 Fetching value of define "__PCLMUL__" : 1 00:02:09.658 Fetching value of define "__RDRND__" : 1 00:02:09.658 Fetching value of define "__RDSEED__" : 1 00:02:09.658 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:09.658 Fetching value of define "__znver1__" : (undefined) 00:02:09.658 Fetching value of define "__znver2__" : (undefined) 00:02:09.658 Fetching value of define "__znver3__" : (undefined) 00:02:09.658 Fetching value of define "__znver4__" : (undefined) 00:02:09.658 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:09.658 Message: lib/log: Defining dependency "log" 00:02:09.658 Message: lib/kvargs: Defining dependency "kvargs" 00:02:09.658 Message: lib/telemetry: Defining dependency "telemetry" 00:02:09.658 Checking for function "getentropy" : NO 00:02:09.658 Message: lib/eal: Defining dependency "eal" 00:02:09.658 Message: lib/ring: Defining dependency "ring" 00:02:09.658 Message: lib/rcu: Defining dependency "rcu" 00:02:09.658 Message: lib/mempool: Defining dependency "mempool" 00:02:09.658 Message: lib/mbuf: Defining dependency "mbuf" 00:02:09.658 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:09.658 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:09.658 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:09.658 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:09.658 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:09.658 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:09.658 Compiler for C supports arguments -mpclmul: YES 00:02:09.658 Compiler for C supports arguments -maes: YES 00:02:09.658 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:09.658 Compiler for C supports arguments -mavx512bw: YES 00:02:09.658 Compiler for C supports arguments -mavx512dq: YES 00:02:09.658 Compiler for C supports arguments -mavx512vl: YES 00:02:09.658 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:09.658 Compiler for C supports arguments -mavx2: YES 00:02:09.658 Compiler for C supports arguments -mavx: YES 00:02:09.658 Message: lib/net: Defining dependency "net" 00:02:09.658 Message: lib/meter: Defining dependency "meter" 00:02:09.658 Message: lib/ethdev: Defining dependency "ethdev" 00:02:09.658 Message: lib/pci: Defining dependency "pci" 00:02:09.658 Message: lib/cmdline: Defining dependency "cmdline" 00:02:09.658 Message: lib/hash: Defining dependency "hash" 00:02:09.658 Message: lib/timer: Defining dependency "timer" 00:02:09.658 Message: lib/compressdev: Defining dependency "compressdev" 00:02:09.658 Message: 
lib/cryptodev: Defining dependency "cryptodev" 00:02:09.658 Message: lib/dmadev: Defining dependency "dmadev" 00:02:09.658 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:09.658 Message: lib/power: Defining dependency "power" 00:02:09.658 Message: lib/reorder: Defining dependency "reorder" 00:02:09.658 Message: lib/security: Defining dependency "security" 00:02:09.658 Has header "linux/userfaultfd.h" : YES 00:02:09.658 Has header "linux/vduse.h" : YES 00:02:09.658 Message: lib/vhost: Defining dependency "vhost" 00:02:09.658 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:09.658 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:09.658 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:09.658 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:09.658 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:09.658 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:09.658 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:09.658 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:09.658 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:09.658 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:09.658 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:09.658 Configuring doxy-api-html.conf using configuration 00:02:09.658 Configuring doxy-api-man.conf using configuration 00:02:09.658 Program mandb found: YES (/usr/bin/mandb) 00:02:09.658 Program sphinx-build found: NO 00:02:09.658 Configuring rte_build_config.h using configuration 00:02:09.658 Message: 00:02:09.658 ================= 00:02:09.658 Applications Enabled 00:02:09.658 ================= 00:02:09.658 00:02:09.658 apps: 00:02:09.658 00:02:09.658 00:02:09.658 Message: 00:02:09.658 ================= 00:02:09.658 Libraries Enabled 00:02:09.658 ================= 00:02:09.658 00:02:09.658 libs: 00:02:09.658 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:09.658 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:09.658 cryptodev, dmadev, power, reorder, security, vhost, 00:02:09.658 00:02:09.658 Message: 00:02:09.658 =============== 00:02:09.658 Drivers Enabled 00:02:09.658 =============== 00:02:09.658 00:02:09.658 common: 00:02:09.658 00:02:09.658 bus: 00:02:09.658 pci, vdev, 00:02:09.658 mempool: 00:02:09.658 ring, 00:02:09.658 dma: 00:02:09.658 00:02:09.658 net: 00:02:09.658 00:02:09.658 crypto: 00:02:09.658 00:02:09.658 compress: 00:02:09.658 00:02:09.658 vdpa: 00:02:09.658 00:02:09.658 00:02:09.658 Message: 00:02:09.658 ================= 00:02:09.658 Content Skipped 00:02:09.658 ================= 00:02:09.658 00:02:09.658 apps: 00:02:09.658 dumpcap: explicitly disabled via build config 00:02:09.658 graph: explicitly disabled via build config 00:02:09.658 pdump: explicitly disabled via build config 00:02:09.658 proc-info: explicitly disabled via build config 00:02:09.658 test-acl: explicitly disabled via build config 00:02:09.658 test-bbdev: explicitly disabled via build config 00:02:09.658 test-cmdline: explicitly disabled via build config 00:02:09.658 test-compress-perf: explicitly disabled via build config 00:02:09.658 test-crypto-perf: explicitly disabled via build config 00:02:09.658 test-dma-perf: explicitly disabled via build config 00:02:09.658 test-eventdev: explicitly disabled via build config 00:02:09.658 test-fib: explicitly disabled via build config 
00:02:09.658 test-flow-perf: explicitly disabled via build config 00:02:09.658 test-gpudev: explicitly disabled via build config 00:02:09.658 test-mldev: explicitly disabled via build config 00:02:09.658 test-pipeline: explicitly disabled via build config 00:02:09.658 test-pmd: explicitly disabled via build config 00:02:09.658 test-regex: explicitly disabled via build config 00:02:09.658 test-sad: explicitly disabled via build config 00:02:09.658 test-security-perf: explicitly disabled via build config 00:02:09.658 00:02:09.658 libs: 00:02:09.658 argparse: explicitly disabled via build config 00:02:09.658 metrics: explicitly disabled via build config 00:02:09.658 acl: explicitly disabled via build config 00:02:09.658 bbdev: explicitly disabled via build config 00:02:09.658 bitratestats: explicitly disabled via build config 00:02:09.658 bpf: explicitly disabled via build config 00:02:09.658 cfgfile: explicitly disabled via build config 00:02:09.658 distributor: explicitly disabled via build config 00:02:09.658 efd: explicitly disabled via build config 00:02:09.658 eventdev: explicitly disabled via build config 00:02:09.658 dispatcher: explicitly disabled via build config 00:02:09.658 gpudev: explicitly disabled via build config 00:02:09.658 gro: explicitly disabled via build config 00:02:09.658 gso: explicitly disabled via build config 00:02:09.658 ip_frag: explicitly disabled via build config 00:02:09.658 jobstats: explicitly disabled via build config 00:02:09.658 latencystats: explicitly disabled via build config 00:02:09.658 lpm: explicitly disabled via build config 00:02:09.658 member: explicitly disabled via build config 00:02:09.658 pcapng: explicitly disabled via build config 00:02:09.658 rawdev: explicitly disabled via build config 00:02:09.658 regexdev: explicitly disabled via build config 00:02:09.658 mldev: explicitly disabled via build config 00:02:09.658 rib: explicitly disabled via build config 00:02:09.658 sched: explicitly disabled via build config 00:02:09.658 stack: explicitly disabled via build config 00:02:09.658 ipsec: explicitly disabled via build config 00:02:09.658 pdcp: explicitly disabled via build config 00:02:09.658 fib: explicitly disabled via build config 00:02:09.658 port: explicitly disabled via build config 00:02:09.658 pdump: explicitly disabled via build config 00:02:09.658 table: explicitly disabled via build config 00:02:09.658 pipeline: explicitly disabled via build config 00:02:09.659 graph: explicitly disabled via build config 00:02:09.659 node: explicitly disabled via build config 00:02:09.659 00:02:09.659 drivers: 00:02:09.659 common/cpt: not in enabled drivers build config 00:02:09.659 common/dpaax: not in enabled drivers build config 00:02:09.659 common/iavf: not in enabled drivers build config 00:02:09.659 common/idpf: not in enabled drivers build config 00:02:09.659 common/ionic: not in enabled drivers build config 00:02:09.659 common/mvep: not in enabled drivers build config 00:02:09.659 common/octeontx: not in enabled drivers build config 00:02:09.659 bus/auxiliary: not in enabled drivers build config 00:02:09.659 bus/cdx: not in enabled drivers build config 00:02:09.659 bus/dpaa: not in enabled drivers build config 00:02:09.659 bus/fslmc: not in enabled drivers build config 00:02:09.659 bus/ifpga: not in enabled drivers build config 00:02:09.659 bus/platform: not in enabled drivers build config 00:02:09.659 bus/uacce: not in enabled drivers build config 00:02:09.659 bus/vmbus: not in enabled drivers build config 00:02:09.659 common/cnxk: not 
in enabled drivers build config 00:02:09.659 common/mlx5: not in enabled drivers build config 00:02:09.659 common/nfp: not in enabled drivers build config 00:02:09.659 common/nitrox: not in enabled drivers build config 00:02:09.659 common/qat: not in enabled drivers build config 00:02:09.659 common/sfc_efx: not in enabled drivers build config 00:02:09.659 mempool/bucket: not in enabled drivers build config 00:02:09.659 mempool/cnxk: not in enabled drivers build config 00:02:09.659 mempool/dpaa: not in enabled drivers build config 00:02:09.659 mempool/dpaa2: not in enabled drivers build config 00:02:09.659 mempool/octeontx: not in enabled drivers build config 00:02:09.659 mempool/stack: not in enabled drivers build config 00:02:09.659 dma/cnxk: not in enabled drivers build config 00:02:09.659 dma/dpaa: not in enabled drivers build config 00:02:09.659 dma/dpaa2: not in enabled drivers build config 00:02:09.659 dma/hisilicon: not in enabled drivers build config 00:02:09.659 dma/idxd: not in enabled drivers build config 00:02:09.659 dma/ioat: not in enabled drivers build config 00:02:09.659 dma/skeleton: not in enabled drivers build config 00:02:09.659 net/af_packet: not in enabled drivers build config 00:02:09.659 net/af_xdp: not in enabled drivers build config 00:02:09.659 net/ark: not in enabled drivers build config 00:02:09.659 net/atlantic: not in enabled drivers build config 00:02:09.659 net/avp: not in enabled drivers build config 00:02:09.659 net/axgbe: not in enabled drivers build config 00:02:09.659 net/bnx2x: not in enabled drivers build config 00:02:09.659 net/bnxt: not in enabled drivers build config 00:02:09.659 net/bonding: not in enabled drivers build config 00:02:09.659 net/cnxk: not in enabled drivers build config 00:02:09.659 net/cpfl: not in enabled drivers build config 00:02:09.659 net/cxgbe: not in enabled drivers build config 00:02:09.659 net/dpaa: not in enabled drivers build config 00:02:09.659 net/dpaa2: not in enabled drivers build config 00:02:09.659 net/e1000: not in enabled drivers build config 00:02:09.659 net/ena: not in enabled drivers build config 00:02:09.659 net/enetc: not in enabled drivers build config 00:02:09.659 net/enetfec: not in enabled drivers build config 00:02:09.659 net/enic: not in enabled drivers build config 00:02:09.659 net/failsafe: not in enabled drivers build config 00:02:09.659 net/fm10k: not in enabled drivers build config 00:02:09.659 net/gve: not in enabled drivers build config 00:02:09.659 net/hinic: not in enabled drivers build config 00:02:09.659 net/hns3: not in enabled drivers build config 00:02:09.659 net/i40e: not in enabled drivers build config 00:02:09.659 net/iavf: not in enabled drivers build config 00:02:09.659 net/ice: not in enabled drivers build config 00:02:09.659 net/idpf: not in enabled drivers build config 00:02:09.659 net/igc: not in enabled drivers build config 00:02:09.659 net/ionic: not in enabled drivers build config 00:02:09.659 net/ipn3ke: not in enabled drivers build config 00:02:09.659 net/ixgbe: not in enabled drivers build config 00:02:09.659 net/mana: not in enabled drivers build config 00:02:09.659 net/memif: not in enabled drivers build config 00:02:09.659 net/mlx4: not in enabled drivers build config 00:02:09.659 net/mlx5: not in enabled drivers build config 00:02:09.659 net/mvneta: not in enabled drivers build config 00:02:09.659 net/mvpp2: not in enabled drivers build config 00:02:09.659 net/netvsc: not in enabled drivers build config 00:02:09.659 net/nfb: not in enabled drivers build config 
00:02:09.659 net/nfp: not in enabled drivers build config 00:02:09.659 net/ngbe: not in enabled drivers build config 00:02:09.659 net/null: not in enabled drivers build config 00:02:09.659 net/octeontx: not in enabled drivers build config 00:02:09.659 net/octeon_ep: not in enabled drivers build config 00:02:09.659 net/pcap: not in enabled drivers build config 00:02:09.659 net/pfe: not in enabled drivers build config 00:02:09.659 net/qede: not in enabled drivers build config 00:02:09.659 net/ring: not in enabled drivers build config 00:02:09.659 net/sfc: not in enabled drivers build config 00:02:09.659 net/softnic: not in enabled drivers build config 00:02:09.659 net/tap: not in enabled drivers build config 00:02:09.659 net/thunderx: not in enabled drivers build config 00:02:09.659 net/txgbe: not in enabled drivers build config 00:02:09.659 net/vdev_netvsc: not in enabled drivers build config 00:02:09.659 net/vhost: not in enabled drivers build config 00:02:09.659 net/virtio: not in enabled drivers build config 00:02:09.659 net/vmxnet3: not in enabled drivers build config 00:02:09.659 raw/*: missing internal dependency, "rawdev" 00:02:09.659 crypto/armv8: not in enabled drivers build config 00:02:09.659 crypto/bcmfs: not in enabled drivers build config 00:02:09.659 crypto/caam_jr: not in enabled drivers build config 00:02:09.659 crypto/ccp: not in enabled drivers build config 00:02:09.659 crypto/cnxk: not in enabled drivers build config 00:02:09.659 crypto/dpaa_sec: not in enabled drivers build config 00:02:09.659 crypto/dpaa2_sec: not in enabled drivers build config 00:02:09.659 crypto/ipsec_mb: not in enabled drivers build config 00:02:09.659 crypto/mlx5: not in enabled drivers build config 00:02:09.659 crypto/mvsam: not in enabled drivers build config 00:02:09.659 crypto/nitrox: not in enabled drivers build config 00:02:09.659 crypto/null: not in enabled drivers build config 00:02:09.659 crypto/octeontx: not in enabled drivers build config 00:02:09.659 crypto/openssl: not in enabled drivers build config 00:02:09.659 crypto/scheduler: not in enabled drivers build config 00:02:09.659 crypto/uadk: not in enabled drivers build config 00:02:09.659 crypto/virtio: not in enabled drivers build config 00:02:09.659 compress/isal: not in enabled drivers build config 00:02:09.659 compress/mlx5: not in enabled drivers build config 00:02:09.659 compress/nitrox: not in enabled drivers build config 00:02:09.659 compress/octeontx: not in enabled drivers build config 00:02:09.659 compress/zlib: not in enabled drivers build config 00:02:09.659 regex/*: missing internal dependency, "regexdev" 00:02:09.659 ml/*: missing internal dependency, "mldev" 00:02:09.659 vdpa/ifc: not in enabled drivers build config 00:02:09.659 vdpa/mlx5: not in enabled drivers build config 00:02:09.659 vdpa/nfp: not in enabled drivers build config 00:02:09.659 vdpa/sfc: not in enabled drivers build config 00:02:09.659 event/*: missing internal dependency, "eventdev" 00:02:09.659 baseband/*: missing internal dependency, "bbdev" 00:02:09.659 gpu/*: missing internal dependency, "gpudev" 00:02:09.659 00:02:09.659 00:02:09.659 Build targets in project: 85 00:02:09.659 00:02:09.659 DPDK 24.03.0 00:02:09.659 00:02:09.659 User defined options 00:02:09.659 buildtype : debug 00:02:09.659 default_library : shared 00:02:09.659 libdir : lib 00:02:09.659 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:09.659 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:09.659 c_link_args : 
00:02:09.659 cpu_instruction_set: native 00:02:09.659 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:09.659 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:09.659 enable_docs : false 00:02:09.659 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:09.659 enable_kmods : false 00:02:09.659 max_lcores : 128 00:02:09.659 tests : false 00:02:09.659 00:02:09.659 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:09.659 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:09.659 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:09.659 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:09.659 [3/268] Linking static target lib/librte_kvargs.a 00:02:09.659 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:09.659 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:09.659 [6/268] Linking static target lib/librte_log.a 00:02:09.659 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:09.659 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:09.659 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:09.659 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:09.659 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:09.659 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:09.659 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.659 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:09.659 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:09.659 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:09.659 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:09.659 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:09.659 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:09.659 [20/268] Linking static target lib/librte_telemetry.a 00:02:09.659 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.659 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:09.659 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:09.659 [24/268] Linking target lib/librte_log.so.24.1 00:02:09.918 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:09.918 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:09.918 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:09.918 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:10.176 [29/268] Linking target 
lib/librte_kvargs.so.24.1 00:02:10.176 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:10.176 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:10.176 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:10.176 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:10.434 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:10.434 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:10.434 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:10.435 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:10.435 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:10.435 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:10.435 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:10.435 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:10.693 [42/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.693 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:10.693 [44/268] Linking target lib/librte_telemetry.so.24.1 00:02:10.693 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:10.693 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:10.951 [47/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:10.951 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:10.951 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:10.951 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:10.952 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:11.210 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:11.210 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:11.210 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:11.210 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:11.210 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:11.468 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:11.468 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:11.468 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:11.468 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:11.468 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:11.468 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:11.726 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:11.726 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:11.984 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:11.984 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:11.984 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:11.984 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:12.242 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:12.242 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:12.242 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:12.242 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:12.242 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:12.242 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:12.242 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:12.242 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:12.500 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:12.500 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:12.500 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:12.757 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:12.757 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:12.757 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:12.757 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:13.017 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:13.017 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:13.017 [86/268] Linking static target lib/librte_eal.a 00:02:13.017 [87/268] Linking static target lib/librte_ring.a 00:02:13.017 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:13.017 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:13.281 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.281 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:13.281 [92/268] Linking static target lib/librte_rcu.a 00:02:13.281 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.538 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:13.538 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:13.538 [96/268] Linking static target lib/librte_mempool.a 00:02:13.538 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.795 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.796 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:13.796 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:13.796 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.796 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.796 [103/268] Linking static target lib/librte_mbuf.a 00:02:14.053 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:14.053 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:14.053 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:14.053 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:14.310 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:14.310 [109/268] Linking static target lib/librte_net.a 00:02:14.310 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:14.310 [111/268] 
Linking static target lib/librte_meter.a 00:02:14.310 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:14.568 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:14.568 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:14.568 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.826 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.826 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.826 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:14.826 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.083 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:15.083 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:15.083 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:15.339 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:15.597 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:15.597 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:15.597 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:15.597 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:15.597 [128/268] Linking static target lib/librte_pci.a 00:02:15.597 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:15.597 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:15.597 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:15.597 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:15.597 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:15.854 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:15.854 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:15.854 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:15.854 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:15.854 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:15.854 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:15.854 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:15.854 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:15.854 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:15.854 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:15.854 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:16.111 [145/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.111 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:16.111 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:16.111 [148/268] Linking static target lib/librte_cmdline.a 00:02:16.368 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:16.368 [150/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:16.368 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:16.626 [152/268] Linking static target lib/librte_ethdev.a 00:02:16.626 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:16.626 [154/268] Linking static target lib/librte_timer.a 00:02:16.626 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:16.626 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:16.626 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:16.626 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:16.626 [159/268] Linking static target lib/librte_hash.a 00:02:16.895 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:16.895 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:17.153 [162/268] Linking static target lib/librte_compressdev.a 00:02:17.153 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:17.153 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:17.154 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:17.154 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.412 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:17.412 [168/268] Linking static target lib/librte_dmadev.a 00:02:17.412 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:17.412 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:17.672 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:17.672 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:17.672 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:17.672 [174/268] Linking static target lib/librte_cryptodev.a 00:02:17.672 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:17.932 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.932 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.932 [178/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.932 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:18.191 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:18.191 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:18.191 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:18.191 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.191 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:18.450 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:18.450 [186/268] Linking static target lib/librte_power.a 00:02:18.450 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:18.450 [188/268] Linking static target lib/librte_reorder.a 00:02:18.711 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:18.711 [190/268] Compiling C object 
lib/librte_security.a.p/security_rte_security.c.o 00:02:18.711 [191/268] Linking static target lib/librte_security.a 00:02:18.711 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:18.711 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:18.969 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:18.969 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.535 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.535 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:19.535 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.535 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:19.535 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:19.535 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:19.794 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:20.051 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.051 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:20.051 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:20.051 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:20.051 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:20.051 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:20.051 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:20.309 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:20.309 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:20.309 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:20.309 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.309 [214/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:20.309 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.567 [216/268] Linking static target drivers/librte_bus_pci.a 00:02:20.567 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:20.567 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:20.567 [219/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:20.567 [220/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.567 [221/268] Linking static target drivers/librte_bus_vdev.a 00:02:20.567 [222/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.567 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:20.825 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.825 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.825 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:20.825 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.825 
[228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.805 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:21.805 [230/268] Linking static target lib/librte_vhost.a 00:02:23.706 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.964 [232/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.223 [233/268] Linking target lib/librte_eal.so.24.1 00:02:24.223 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:24.223 [235/268] Linking target lib/librte_ring.so.24.1 00:02:24.223 [236/268] Linking target lib/librte_meter.so.24.1 00:02:24.223 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:24.223 [238/268] Linking target lib/librte_timer.so.24.1 00:02:24.223 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:24.223 [240/268] Linking target lib/librte_pci.so.24.1 00:02:24.481 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:24.481 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:24.481 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:24.481 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:24.481 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:24.481 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:24.481 [247/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:24.481 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:24.481 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:24.481 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:24.741 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:24.741 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:24.741 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:24.741 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:24.741 [255/268] Linking target lib/librte_net.so.24.1 00:02:24.741 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:24.741 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:25.001 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:25.001 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:25.001 [260/268] Linking target lib/librte_cmdline.so.24.1 00:02:25.001 [261/268] Linking target lib/librte_hash.so.24.1 00:02:25.001 [262/268] Linking target lib/librte_security.so.24.1 00:02:25.261 [263/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:25.261 [264/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.261 [265/268] Linking target lib/librte_ethdev.so.24.1 00:02:25.575 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:25.575 [267/268] Linking target lib/librte_power.so.24.1 00:02:25.575 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:25.575 INFO: autodetecting backend as ninja 00:02:25.575 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:52.133 CC lib/ut_mock/mock.o 00:02:52.133 
CC lib/log/log.o 00:02:52.133 CC lib/log/log_deprecated.o 00:02:52.133 CC lib/log/log_flags.o 00:02:52.133 CC lib/ut/ut.o 00:02:52.133 LIB libspdk_ut_mock.a 00:02:52.133 LIB libspdk_log.a 00:02:52.133 SO libspdk_ut_mock.so.6.0 00:02:52.133 LIB libspdk_ut.a 00:02:52.133 SYMLINK libspdk_ut_mock.so 00:02:52.133 SO libspdk_log.so.7.1 00:02:52.133 SO libspdk_ut.so.2.0 00:02:52.133 SYMLINK libspdk_log.so 00:02:52.133 SYMLINK libspdk_ut.so 00:02:52.133 CC lib/ioat/ioat.o 00:02:52.133 CXX lib/trace_parser/trace.o 00:02:52.133 CC lib/util/base64.o 00:02:52.133 CC lib/util/bit_array.o 00:02:52.133 CC lib/util/cpuset.o 00:02:52.133 CC lib/util/crc32.o 00:02:52.133 CC lib/dma/dma.o 00:02:52.133 CC lib/util/crc32c.o 00:02:52.133 CC lib/util/crc16.o 00:02:52.133 CC lib/vfio_user/host/vfio_user_pci.o 00:02:52.133 CC lib/vfio_user/host/vfio_user.o 00:02:52.133 CC lib/util/crc32_ieee.o 00:02:52.133 CC lib/util/crc64.o 00:02:52.133 CC lib/util/dif.o 00:02:52.133 CC lib/util/fd.o 00:02:52.133 LIB libspdk_dma.a 00:02:52.133 SO libspdk_dma.so.5.0 00:02:52.133 SYMLINK libspdk_dma.so 00:02:52.133 CC lib/util/fd_group.o 00:02:52.133 CC lib/util/file.o 00:02:52.133 CC lib/util/hexlify.o 00:02:52.133 CC lib/util/iov.o 00:02:52.133 LIB libspdk_ioat.a 00:02:52.133 CC lib/util/math.o 00:02:52.133 CC lib/util/net.o 00:02:52.133 SO libspdk_ioat.so.7.0 00:02:52.133 LIB libspdk_vfio_user.a 00:02:52.133 SO libspdk_vfio_user.so.5.0 00:02:52.133 SYMLINK libspdk_ioat.so 00:02:52.133 CC lib/util/pipe.o 00:02:52.133 CC lib/util/strerror_tls.o 00:02:52.133 CC lib/util/string.o 00:02:52.133 CC lib/util/uuid.o 00:02:52.133 SYMLINK libspdk_vfio_user.so 00:02:52.133 CC lib/util/xor.o 00:02:52.133 CC lib/util/zipf.o 00:02:52.133 CC lib/util/md5.o 00:02:52.133 LIB libspdk_util.a 00:02:52.133 SO libspdk_util.so.10.1 00:02:52.133 LIB libspdk_trace_parser.a 00:02:52.133 SYMLINK libspdk_util.so 00:02:52.133 SO libspdk_trace_parser.so.6.0 00:02:52.133 SYMLINK libspdk_trace_parser.so 00:02:52.133 CC lib/vmd/vmd.o 00:02:52.133 CC lib/vmd/led.o 00:02:52.133 CC lib/env_dpdk/env.o 00:02:52.133 CC lib/env_dpdk/pci.o 00:02:52.133 CC lib/env_dpdk/init.o 00:02:52.133 CC lib/idxd/idxd.o 00:02:52.133 CC lib/env_dpdk/memory.o 00:02:52.133 CC lib/conf/conf.o 00:02:52.133 CC lib/json/json_parse.o 00:02:52.133 CC lib/rdma_utils/rdma_utils.o 00:02:52.133 CC lib/idxd/idxd_user.o 00:02:52.133 LIB libspdk_conf.a 00:02:52.133 CC lib/json/json_util.o 00:02:52.133 SO libspdk_conf.so.6.0 00:02:52.133 LIB libspdk_rdma_utils.a 00:02:52.133 SO libspdk_rdma_utils.so.1.0 00:02:52.133 SYMLINK libspdk_conf.so 00:02:52.133 CC lib/idxd/idxd_kernel.o 00:02:52.133 CC lib/json/json_write.o 00:02:52.133 SYMLINK libspdk_rdma_utils.so 00:02:52.133 CC lib/env_dpdk/threads.o 00:02:52.133 CC lib/env_dpdk/pci_ioat.o 00:02:52.133 CC lib/env_dpdk/pci_virtio.o 00:02:52.133 CC lib/env_dpdk/pci_vmd.o 00:02:52.133 CC lib/rdma_provider/common.o 00:02:52.133 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:52.133 CC lib/env_dpdk/pci_idxd.o 00:02:52.133 LIB libspdk_vmd.a 00:02:52.133 SO libspdk_vmd.so.6.0 00:02:52.133 LIB libspdk_json.a 00:02:52.133 LIB libspdk_idxd.a 00:02:52.133 CC lib/env_dpdk/pci_event.o 00:02:52.133 CC lib/env_dpdk/sigbus_handler.o 00:02:52.133 SO libspdk_json.so.6.0 00:02:52.133 SO libspdk_idxd.so.12.1 00:02:52.133 SYMLINK libspdk_vmd.so 00:02:52.133 CC lib/env_dpdk/pci_dpdk.o 00:02:52.133 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:52.133 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:52.133 SYMLINK libspdk_json.so 00:02:52.133 LIB libspdk_rdma_provider.a 00:02:52.133 SYMLINK 
libspdk_idxd.so 00:02:52.133 SO libspdk_rdma_provider.so.7.0 00:02:52.133 SYMLINK libspdk_rdma_provider.so 00:02:52.133 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:52.133 CC lib/jsonrpc/jsonrpc_server.o 00:02:52.133 CC lib/jsonrpc/jsonrpc_client.o 00:02:52.133 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:52.133 LIB libspdk_jsonrpc.a 00:02:52.133 SO libspdk_jsonrpc.so.6.0 00:02:52.133 SYMLINK libspdk_jsonrpc.so 00:02:52.133 LIB libspdk_env_dpdk.a 00:02:52.133 SO libspdk_env_dpdk.so.15.1 00:02:52.133 SYMLINK libspdk_env_dpdk.so 00:02:52.133 CC lib/rpc/rpc.o 00:02:52.133 LIB libspdk_rpc.a 00:02:52.133 SO libspdk_rpc.so.6.0 00:02:52.133 SYMLINK libspdk_rpc.so 00:02:52.392 CC lib/trace/trace_flags.o 00:02:52.392 CC lib/trace/trace.o 00:02:52.392 CC lib/trace/trace_rpc.o 00:02:52.392 CC lib/keyring/keyring.o 00:02:52.392 CC lib/keyring/keyring_rpc.o 00:02:52.392 CC lib/notify/notify.o 00:02:52.392 CC lib/notify/notify_rpc.o 00:02:52.392 LIB libspdk_notify.a 00:02:52.650 LIB libspdk_trace.a 00:02:52.650 LIB libspdk_keyring.a 00:02:52.650 SO libspdk_notify.so.6.0 00:02:52.650 SO libspdk_trace.so.11.0 00:02:52.650 SO libspdk_keyring.so.2.0 00:02:52.650 SYMLINK libspdk_notify.so 00:02:52.650 SYMLINK libspdk_keyring.so 00:02:52.650 SYMLINK libspdk_trace.so 00:02:52.912 CC lib/sock/sock.o 00:02:52.912 CC lib/sock/sock_rpc.o 00:02:53.176 CC lib/thread/thread.o 00:02:53.176 CC lib/thread/iobuf.o 00:02:53.443 LIB libspdk_sock.a 00:02:53.443 SO libspdk_sock.so.10.0 00:02:53.443 SYMLINK libspdk_sock.so 00:02:54.030 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:54.030 CC lib/nvme/nvme_ctrlr.o 00:02:54.030 CC lib/nvme/nvme_ns.o 00:02:54.030 CC lib/nvme/nvme_fabric.o 00:02:54.030 CC lib/nvme/nvme_ns_cmd.o 00:02:54.030 CC lib/nvme/nvme_pcie.o 00:02:54.030 CC lib/nvme/nvme.o 00:02:54.030 CC lib/nvme/nvme_qpair.o 00:02:54.030 CC lib/nvme/nvme_pcie_common.o 00:02:54.301 LIB libspdk_thread.a 00:02:54.576 SO libspdk_thread.so.11.0 00:02:54.576 SYMLINK libspdk_thread.so 00:02:54.576 CC lib/nvme/nvme_quirks.o 00:02:54.576 CC lib/nvme/nvme_transport.o 00:02:54.576 CC lib/nvme/nvme_discovery.o 00:02:54.576 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:54.843 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:54.843 CC lib/nvme/nvme_tcp.o 00:02:54.843 CC lib/nvme/nvme_opal.o 00:02:54.843 CC lib/nvme/nvme_io_msg.o 00:02:55.110 CC lib/nvme/nvme_poll_group.o 00:02:55.110 CC lib/nvme/nvme_zns.o 00:02:55.380 CC lib/nvme/nvme_stubs.o 00:02:55.380 CC lib/nvme/nvme_auth.o 00:02:55.380 CC lib/nvme/nvme_cuse.o 00:02:55.380 CC lib/nvme/nvme_rdma.o 00:02:55.653 CC lib/accel/accel.o 00:02:55.653 CC lib/accel/accel_rpc.o 00:02:55.653 CC lib/accel/accel_sw.o 00:02:55.923 CC lib/blob/blobstore.o 00:02:55.923 CC lib/init/json_config.o 00:02:55.923 CC lib/virtio/virtio.o 00:02:55.923 CC lib/virtio/virtio_vhost_user.o 00:02:55.923 CC lib/virtio/virtio_vfio_user.o 00:02:56.193 CC lib/init/subsystem.o 00:02:56.193 CC lib/init/subsystem_rpc.o 00:02:56.193 CC lib/virtio/virtio_pci.o 00:02:56.193 CC lib/init/rpc.o 00:02:56.193 CC lib/blob/request.o 00:02:56.193 CC lib/blob/zeroes.o 00:02:56.193 CC lib/blob/blob_bs_dev.o 00:02:56.464 LIB libspdk_init.a 00:02:56.464 SO libspdk_init.so.6.0 00:02:56.464 LIB libspdk_virtio.a 00:02:56.464 SYMLINK libspdk_init.so 00:02:56.464 CC lib/fsdev/fsdev.o 00:02:56.464 CC lib/fsdev/fsdev_io.o 00:02:56.464 CC lib/fsdev/fsdev_rpc.o 00:02:56.464 SO libspdk_virtio.so.7.0 00:02:56.464 LIB libspdk_accel.a 00:02:56.464 SYMLINK libspdk_virtio.so 00:02:56.739 SO libspdk_accel.so.16.0 00:02:56.739 LIB libspdk_nvme.a 00:02:56.739 SYMLINK libspdk_accel.so 
00:02:56.739 CC lib/event/app.o 00:02:56.739 CC lib/event/reactor.o 00:02:56.739 CC lib/event/log_rpc.o 00:02:56.739 SO libspdk_nvme.so.15.0 00:02:56.739 CC lib/event/scheduler_static.o 00:02:56.739 CC lib/event/app_rpc.o 00:02:57.043 CC lib/bdev/bdev.o 00:02:57.043 CC lib/bdev/bdev_rpc.o 00:02:57.043 CC lib/bdev/bdev_zone.o 00:02:57.043 CC lib/bdev/part.o 00:02:57.043 SYMLINK libspdk_nvme.so 00:02:57.043 CC lib/bdev/scsi_nvme.o 00:02:57.043 LIB libspdk_fsdev.a 00:02:57.335 SO libspdk_fsdev.so.2.0 00:02:57.335 SYMLINK libspdk_fsdev.so 00:02:57.335 LIB libspdk_event.a 00:02:57.335 SO libspdk_event.so.14.0 00:02:57.335 SYMLINK libspdk_event.so 00:02:57.595 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:58.163 LIB libspdk_fuse_dispatcher.a 00:02:58.163 SO libspdk_fuse_dispatcher.so.1.0 00:02:58.163 SYMLINK libspdk_fuse_dispatcher.so 00:02:58.732 LIB libspdk_blob.a 00:02:58.732 SO libspdk_blob.so.12.0 00:02:58.732 SYMLINK libspdk_blob.so 00:02:58.992 CC lib/lvol/lvol.o 00:02:59.252 CC lib/blobfs/blobfs.o 00:02:59.252 CC lib/blobfs/tree.o 00:02:59.821 LIB libspdk_bdev.a 00:02:59.821 SO libspdk_bdev.so.17.0 00:02:59.821 LIB libspdk_blobfs.a 00:02:59.821 SYMLINK libspdk_bdev.so 00:02:59.821 SO libspdk_blobfs.so.11.0 00:02:59.821 LIB libspdk_lvol.a 00:03:00.078 SO libspdk_lvol.so.11.0 00:03:00.078 SYMLINK libspdk_blobfs.so 00:03:00.078 SYMLINK libspdk_lvol.so 00:03:00.079 CC lib/ublk/ublk_rpc.o 00:03:00.079 CC lib/ublk/ublk.o 00:03:00.079 CC lib/nvmf/ctrlr_discovery.o 00:03:00.079 CC lib/nvmf/subsystem.o 00:03:00.079 CC lib/nvmf/ctrlr_bdev.o 00:03:00.079 CC lib/nvmf/ctrlr.o 00:03:00.079 CC lib/nvmf/nvmf.o 00:03:00.079 CC lib/ftl/ftl_core.o 00:03:00.079 CC lib/nbd/nbd.o 00:03:00.079 CC lib/scsi/dev.o 00:03:00.336 CC lib/scsi/lun.o 00:03:00.336 CC lib/nbd/nbd_rpc.o 00:03:00.594 CC lib/ftl/ftl_init.o 00:03:00.594 CC lib/scsi/port.o 00:03:00.594 LIB libspdk_nbd.a 00:03:00.594 CC lib/scsi/scsi.o 00:03:00.594 SO libspdk_nbd.so.7.0 00:03:00.594 CC lib/nvmf/nvmf_rpc.o 00:03:00.594 SYMLINK libspdk_nbd.so 00:03:00.594 CC lib/nvmf/transport.o 00:03:00.594 CC lib/ftl/ftl_layout.o 00:03:00.594 CC lib/ftl/ftl_debug.o 00:03:00.594 LIB libspdk_ublk.a 00:03:00.594 CC lib/scsi/scsi_bdev.o 00:03:00.853 SO libspdk_ublk.so.3.0 00:03:00.853 CC lib/nvmf/tcp.o 00:03:00.853 SYMLINK libspdk_ublk.so 00:03:00.853 CC lib/ftl/ftl_io.o 00:03:00.853 CC lib/ftl/ftl_sb.o 00:03:01.112 CC lib/nvmf/stubs.o 00:03:01.112 CC lib/ftl/ftl_l2p.o 00:03:01.112 CC lib/ftl/ftl_l2p_flat.o 00:03:01.112 CC lib/nvmf/mdns_server.o 00:03:01.112 CC lib/scsi/scsi_pr.o 00:03:01.371 CC lib/scsi/scsi_rpc.o 00:03:01.371 CC lib/ftl/ftl_nv_cache.o 00:03:01.371 CC lib/ftl/ftl_band.o 00:03:01.371 CC lib/nvmf/rdma.o 00:03:01.371 CC lib/ftl/ftl_band_ops.o 00:03:01.371 CC lib/scsi/task.o 00:03:01.371 CC lib/ftl/ftl_writer.o 00:03:01.371 CC lib/ftl/ftl_rq.o 00:03:01.634 CC lib/nvmf/auth.o 00:03:01.634 LIB libspdk_scsi.a 00:03:01.634 CC lib/ftl/ftl_reloc.o 00:03:01.634 CC lib/ftl/ftl_l2p_cache.o 00:03:01.634 SO libspdk_scsi.so.9.0 00:03:01.634 CC lib/ftl/ftl_p2l.o 00:03:01.634 CC lib/ftl/ftl_p2l_log.o 00:03:01.634 CC lib/ftl/mngt/ftl_mngt.o 00:03:01.893 SYMLINK libspdk_scsi.so 00:03:01.893 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.152 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:02.152 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:02.152 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:02.152 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:02.152 CC lib/vhost/vhost.o 00:03:02.152 CC lib/iscsi/conn.o 00:03:02.152 CC lib/iscsi/init_grp.o 00:03:02.152 CC lib/iscsi/iscsi.o 00:03:02.152 CC 
lib/vhost/vhost_rpc.o 00:03:02.152 CC lib/vhost/vhost_scsi.o 00:03:02.413 CC lib/vhost/vhost_blk.o 00:03:02.413 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:02.413 CC lib/iscsi/param.o 00:03:02.413 CC lib/iscsi/portal_grp.o 00:03:02.672 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:02.672 CC lib/iscsi/tgt_node.o 00:03:02.672 CC lib/iscsi/iscsi_subsystem.o 00:03:02.672 CC lib/vhost/rte_vhost_user.o 00:03:02.672 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:03.237 CC lib/iscsi/iscsi_rpc.o 00:03:03.237 CC lib/iscsi/task.o 00:03:03.237 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:03.237 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:03.237 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:03.237 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:03.237 CC lib/ftl/utils/ftl_conf.o 00:03:03.237 LIB libspdk_nvmf.a 00:03:03.237 CC lib/ftl/utils/ftl_md.o 00:03:03.237 SO libspdk_nvmf.so.20.0 00:03:03.496 CC lib/ftl/utils/ftl_mempool.o 00:03:03.496 CC lib/ftl/utils/ftl_bitmap.o 00:03:03.496 CC lib/ftl/utils/ftl_property.o 00:03:03.496 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:03.496 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:03.496 LIB libspdk_iscsi.a 00:03:03.496 SYMLINK libspdk_nvmf.so 00:03:03.496 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:03.496 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:03.496 SO libspdk_iscsi.so.8.0 00:03:03.496 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:03.496 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:03.818 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:03.818 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:03.818 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:03.818 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:03.818 SYMLINK libspdk_iscsi.so 00:03:03.818 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:03.818 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:03.818 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:03.818 CC lib/ftl/base/ftl_base_dev.o 00:03:03.818 CC lib/ftl/base/ftl_base_bdev.o 00:03:03.818 LIB libspdk_vhost.a 00:03:03.818 CC lib/ftl/ftl_trace.o 00:03:03.818 SO libspdk_vhost.so.8.0 00:03:04.091 SYMLINK libspdk_vhost.so 00:03:04.091 LIB libspdk_ftl.a 00:03:04.349 SO libspdk_ftl.so.9.0 00:03:04.607 SYMLINK libspdk_ftl.so 00:03:04.865 CC module/env_dpdk/env_dpdk_rpc.o 00:03:05.124 CC module/accel/ioat/accel_ioat.o 00:03:05.124 CC module/keyring/file/keyring.o 00:03:05.124 CC module/keyring/linux/keyring.o 00:03:05.124 CC module/fsdev/aio/fsdev_aio.o 00:03:05.124 CC module/accel/error/accel_error.o 00:03:05.124 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:05.124 CC module/blob/bdev/blob_bdev.o 00:03:05.124 CC module/sock/posix/posix.o 00:03:05.124 CC module/accel/dsa/accel_dsa.o 00:03:05.124 LIB libspdk_env_dpdk_rpc.a 00:03:05.124 SO libspdk_env_dpdk_rpc.so.6.0 00:03:05.124 CC module/keyring/file/keyring_rpc.o 00:03:05.124 SYMLINK libspdk_env_dpdk_rpc.so 00:03:05.124 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:05.124 CC module/keyring/linux/keyring_rpc.o 00:03:05.124 CC module/accel/ioat/accel_ioat_rpc.o 00:03:05.124 CC module/accel/error/accel_error_rpc.o 00:03:05.382 LIB libspdk_scheduler_dynamic.a 00:03:05.382 SO libspdk_scheduler_dynamic.so.4.0 00:03:05.382 LIB libspdk_blob_bdev.a 00:03:05.382 LIB libspdk_keyring_file.a 00:03:05.382 LIB libspdk_keyring_linux.a 00:03:05.382 CC module/accel/dsa/accel_dsa_rpc.o 00:03:05.382 SO libspdk_blob_bdev.so.12.0 00:03:05.382 SO libspdk_keyring_file.so.2.0 00:03:05.382 SO libspdk_keyring_linux.so.1.0 00:03:05.382 LIB libspdk_accel_ioat.a 00:03:05.382 SYMLINK libspdk_scheduler_dynamic.so 00:03:05.382 CC module/fsdev/aio/linux_aio_mgr.o 00:03:05.382 LIB libspdk_accel_error.a 00:03:05.382 SYMLINK 
libspdk_keyring_file.so 00:03:05.382 SO libspdk_accel_ioat.so.6.0 00:03:05.382 SYMLINK libspdk_blob_bdev.so 00:03:05.382 SYMLINK libspdk_keyring_linux.so 00:03:05.382 SO libspdk_accel_error.so.2.0 00:03:05.382 LIB libspdk_accel_dsa.a 00:03:05.382 SYMLINK libspdk_accel_ioat.so 00:03:05.382 SO libspdk_accel_dsa.so.5.0 00:03:05.641 SYMLINK libspdk_accel_error.so 00:03:05.641 SYMLINK libspdk_accel_dsa.so 00:03:05.641 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:05.641 CC module/accel/iaa/accel_iaa.o 00:03:05.641 CC module/sock/uring/uring.o 00:03:05.641 CC module/scheduler/gscheduler/gscheduler.o 00:03:05.641 LIB libspdk_fsdev_aio.a 00:03:05.641 SO libspdk_fsdev_aio.so.1.0 00:03:05.641 CC module/bdev/delay/vbdev_delay.o 00:03:05.641 LIB libspdk_scheduler_dpdk_governor.a 00:03:05.641 LIB libspdk_sock_posix.a 00:03:05.641 CC module/bdev/error/vbdev_error.o 00:03:05.899 LIB libspdk_scheduler_gscheduler.a 00:03:05.899 CC module/bdev/gpt/gpt.o 00:03:05.899 SYMLINK libspdk_fsdev_aio.so 00:03:05.899 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:05.899 CC module/bdev/gpt/vbdev_gpt.o 00:03:05.899 CC module/blobfs/bdev/blobfs_bdev.o 00:03:05.899 SO libspdk_sock_posix.so.6.0 00:03:05.899 SO libspdk_scheduler_gscheduler.so.4.0 00:03:05.899 CC module/accel/iaa/accel_iaa_rpc.o 00:03:05.899 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:05.899 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:05.899 SYMLINK libspdk_scheduler_gscheduler.so 00:03:05.899 CC module/bdev/error/vbdev_error_rpc.o 00:03:05.899 SYMLINK libspdk_sock_posix.so 00:03:05.899 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:05.899 LIB libspdk_accel_iaa.a 00:03:05.899 SO libspdk_accel_iaa.so.3.0 00:03:06.157 LIB libspdk_bdev_error.a 00:03:06.157 LIB libspdk_blobfs_bdev.a 00:03:06.157 SYMLINK libspdk_accel_iaa.so 00:03:06.157 SO libspdk_bdev_error.so.6.0 00:03:06.157 SO libspdk_blobfs_bdev.so.6.0 00:03:06.157 LIB libspdk_bdev_delay.a 00:03:06.157 LIB libspdk_bdev_gpt.a 00:03:06.157 CC module/bdev/lvol/vbdev_lvol.o 00:03:06.157 SO libspdk_bdev_delay.so.6.0 00:03:06.157 SO libspdk_bdev_gpt.so.6.0 00:03:06.157 CC module/bdev/malloc/bdev_malloc.o 00:03:06.157 SYMLINK libspdk_blobfs_bdev.so 00:03:06.157 SYMLINK libspdk_bdev_error.so 00:03:06.157 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:06.157 SYMLINK libspdk_bdev_gpt.so 00:03:06.157 SYMLINK libspdk_bdev_delay.so 00:03:06.157 CC module/bdev/null/bdev_null.o 00:03:06.157 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:06.157 CC module/bdev/nvme/bdev_nvme.o 00:03:06.157 CC module/bdev/passthru/vbdev_passthru.o 00:03:06.415 LIB libspdk_sock_uring.a 00:03:06.415 CC module/bdev/raid/bdev_raid.o 00:03:06.415 SO libspdk_sock_uring.so.5.0 00:03:06.415 CC module/bdev/split/vbdev_split.o 00:03:06.415 SYMLINK libspdk_sock_uring.so 00:03:06.415 CC module/bdev/null/bdev_null_rpc.o 00:03:06.415 LIB libspdk_bdev_malloc.a 00:03:06.415 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:06.415 CC module/bdev/raid/bdev_raid_rpc.o 00:03:06.415 SO libspdk_bdev_malloc.so.6.0 00:03:06.415 CC module/bdev/uring/bdev_uring.o 00:03:06.415 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:06.673 LIB libspdk_bdev_lvol.a 00:03:06.673 SYMLINK libspdk_bdev_malloc.so 00:03:06.673 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:06.673 CC module/bdev/split/vbdev_split_rpc.o 00:03:06.673 SO libspdk_bdev_lvol.so.6.0 00:03:06.673 LIB libspdk_bdev_null.a 00:03:06.673 SO libspdk_bdev_null.so.6.0 00:03:06.673 SYMLINK libspdk_bdev_lvol.so 00:03:06.673 LIB libspdk_bdev_passthru.a 00:03:06.673 SYMLINK libspdk_bdev_null.so 
00:03:06.673 SO libspdk_bdev_passthru.so.6.0 00:03:06.673 LIB libspdk_bdev_split.a 00:03:06.673 SO libspdk_bdev_split.so.6.0 00:03:06.931 SYMLINK libspdk_bdev_passthru.so 00:03:06.931 CC module/bdev/raid/bdev_raid_sb.o 00:03:06.931 LIB libspdk_bdev_zone_block.a 00:03:06.931 CC module/bdev/aio/bdev_aio.o 00:03:06.931 CC module/bdev/uring/bdev_uring_rpc.o 00:03:06.931 CC module/bdev/ftl/bdev_ftl.o 00:03:06.931 SYMLINK libspdk_bdev_split.so 00:03:06.931 CC module/bdev/aio/bdev_aio_rpc.o 00:03:06.931 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:06.931 CC module/bdev/iscsi/bdev_iscsi.o 00:03:06.931 SO libspdk_bdev_zone_block.so.6.0 00:03:06.931 SYMLINK libspdk_bdev_zone_block.so 00:03:06.931 CC module/bdev/raid/raid0.o 00:03:06.931 LIB libspdk_bdev_uring.a 00:03:06.931 CC module/bdev/raid/raid1.o 00:03:06.931 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:07.189 SO libspdk_bdev_uring.so.6.0 00:03:07.189 SYMLINK libspdk_bdev_uring.so 00:03:07.189 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:07.189 LIB libspdk_bdev_aio.a 00:03:07.189 CC module/bdev/raid/concat.o 00:03:07.189 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:07.189 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:07.189 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:07.189 SO libspdk_bdev_aio.so.6.0 00:03:07.189 LIB libspdk_bdev_iscsi.a 00:03:07.189 SO libspdk_bdev_iscsi.so.6.0 00:03:07.447 SYMLINK libspdk_bdev_aio.so 00:03:07.447 CC module/bdev/nvme/nvme_rpc.o 00:03:07.447 CC module/bdev/nvme/bdev_mdns_client.o 00:03:07.447 LIB libspdk_bdev_ftl.a 00:03:07.447 SYMLINK libspdk_bdev_iscsi.so 00:03:07.447 CC module/bdev/nvme/vbdev_opal.o 00:03:07.447 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:07.447 SO libspdk_bdev_ftl.so.6.0 00:03:07.447 LIB libspdk_bdev_raid.a 00:03:07.447 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:07.447 SYMLINK libspdk_bdev_ftl.so 00:03:07.447 LIB libspdk_bdev_virtio.a 00:03:07.447 SO libspdk_bdev_raid.so.6.0 00:03:07.447 SO libspdk_bdev_virtio.so.6.0 00:03:07.447 SYMLINK libspdk_bdev_raid.so 00:03:07.705 SYMLINK libspdk_bdev_virtio.so 00:03:08.641 LIB libspdk_bdev_nvme.a 00:03:08.641 SO libspdk_bdev_nvme.so.7.1 00:03:08.641 SYMLINK libspdk_bdev_nvme.so 00:03:09.208 CC module/event/subsystems/scheduler/scheduler.o 00:03:09.208 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:09.208 CC module/event/subsystems/vmd/vmd.o 00:03:09.208 CC module/event/subsystems/sock/sock.o 00:03:09.208 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:09.208 CC module/event/subsystems/keyring/keyring.o 00:03:09.208 CC module/event/subsystems/fsdev/fsdev.o 00:03:09.208 CC module/event/subsystems/iobuf/iobuf.o 00:03:09.208 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:09.466 LIB libspdk_event_keyring.a 00:03:09.466 LIB libspdk_event_scheduler.a 00:03:09.466 LIB libspdk_event_fsdev.a 00:03:09.466 LIB libspdk_event_vhost_blk.a 00:03:09.466 LIB libspdk_event_vmd.a 00:03:09.466 SO libspdk_event_scheduler.so.4.0 00:03:09.466 SO libspdk_event_vhost_blk.so.3.0 00:03:09.466 SO libspdk_event_keyring.so.1.0 00:03:09.466 SO libspdk_event_fsdev.so.1.0 00:03:09.466 LIB libspdk_event_sock.a 00:03:09.466 SO libspdk_event_vmd.so.6.0 00:03:09.466 LIB libspdk_event_iobuf.a 00:03:09.466 SO libspdk_event_sock.so.5.0 00:03:09.466 SYMLINK libspdk_event_scheduler.so 00:03:09.466 SYMLINK libspdk_event_vhost_blk.so 00:03:09.466 SYMLINK libspdk_event_keyring.so 00:03:09.466 SO libspdk_event_iobuf.so.3.0 00:03:09.466 SYMLINK libspdk_event_fsdev.so 00:03:09.466 SYMLINK libspdk_event_sock.so 00:03:09.466 SYMLINK libspdk_event_vmd.so 00:03:09.466 SYMLINK 
libspdk_event_iobuf.so 00:03:10.034 CC module/event/subsystems/accel/accel.o 00:03:10.034 LIB libspdk_event_accel.a 00:03:10.034 SO libspdk_event_accel.so.6.0 00:03:10.293 SYMLINK libspdk_event_accel.so 00:03:10.552 CC module/event/subsystems/bdev/bdev.o 00:03:10.811 LIB libspdk_event_bdev.a 00:03:10.811 SO libspdk_event_bdev.so.6.0 00:03:10.811 SYMLINK libspdk_event_bdev.so 00:03:11.070 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:11.070 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:11.070 CC module/event/subsystems/nbd/nbd.o 00:03:11.070 CC module/event/subsystems/scsi/scsi.o 00:03:11.070 CC module/event/subsystems/ublk/ublk.o 00:03:11.328 LIB libspdk_event_nbd.a 00:03:11.328 LIB libspdk_event_ublk.a 00:03:11.328 LIB libspdk_event_scsi.a 00:03:11.328 SO libspdk_event_nbd.so.6.0 00:03:11.328 SO libspdk_event_scsi.so.6.0 00:03:11.328 LIB libspdk_event_nvmf.a 00:03:11.328 SO libspdk_event_ublk.so.3.0 00:03:11.328 SO libspdk_event_nvmf.so.6.0 00:03:11.328 SYMLINK libspdk_event_nbd.so 00:03:11.328 SYMLINK libspdk_event_scsi.so 00:03:11.328 SYMLINK libspdk_event_ublk.so 00:03:11.328 SYMLINK libspdk_event_nvmf.so 00:03:11.894 CC module/event/subsystems/iscsi/iscsi.o 00:03:11.894 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:11.894 LIB libspdk_event_iscsi.a 00:03:11.894 LIB libspdk_event_vhost_scsi.a 00:03:11.894 SO libspdk_event_vhost_scsi.so.3.0 00:03:11.894 SO libspdk_event_iscsi.so.6.0 00:03:11.894 SYMLINK libspdk_event_vhost_scsi.so 00:03:11.894 SYMLINK libspdk_event_iscsi.so 00:03:12.153 SO libspdk.so.6.0 00:03:12.153 SYMLINK libspdk.so 00:03:12.411 CC app/trace_record/trace_record.o 00:03:12.411 CC app/spdk_lspci/spdk_lspci.o 00:03:12.411 CXX app/trace/trace.o 00:03:12.411 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:12.671 CC app/iscsi_tgt/iscsi_tgt.o 00:03:12.671 CC app/nvmf_tgt/nvmf_main.o 00:03:12.671 CC app/spdk_tgt/spdk_tgt.o 00:03:12.671 CC examples/ioat/perf/perf.o 00:03:12.671 CC examples/util/zipf/zipf.o 00:03:12.671 CC test/thread/poller_perf/poller_perf.o 00:03:12.671 LINK spdk_lspci 00:03:12.671 LINK interrupt_tgt 00:03:12.671 LINK spdk_trace_record 00:03:12.671 LINK nvmf_tgt 00:03:12.671 LINK zipf 00:03:12.671 LINK iscsi_tgt 00:03:12.930 LINK spdk_tgt 00:03:12.930 LINK poller_perf 00:03:12.930 LINK ioat_perf 00:03:12.930 LINK spdk_trace 00:03:12.930 CC app/spdk_nvme_perf/perf.o 00:03:12.930 CC app/spdk_nvme_identify/identify.o 00:03:12.930 CC app/spdk_nvme_discover/discovery_aer.o 00:03:12.930 CC app/spdk_top/spdk_top.o 00:03:13.188 CC app/spdk_dd/spdk_dd.o 00:03:13.188 CC examples/ioat/verify/verify.o 00:03:13.188 CC app/fio/nvme/fio_plugin.o 00:03:13.188 TEST_HEADER include/spdk/accel.h 00:03:13.188 CC test/dma/test_dma/test_dma.o 00:03:13.188 TEST_HEADER include/spdk/accel_module.h 00:03:13.188 TEST_HEADER include/spdk/assert.h 00:03:13.188 TEST_HEADER include/spdk/barrier.h 00:03:13.188 TEST_HEADER include/spdk/base64.h 00:03:13.188 TEST_HEADER include/spdk/bdev.h 00:03:13.188 TEST_HEADER include/spdk/bdev_module.h 00:03:13.188 TEST_HEADER include/spdk/bdev_zone.h 00:03:13.188 LINK spdk_nvme_discover 00:03:13.188 TEST_HEADER include/spdk/bit_array.h 00:03:13.188 TEST_HEADER include/spdk/bit_pool.h 00:03:13.188 CC test/app/bdev_svc/bdev_svc.o 00:03:13.188 TEST_HEADER include/spdk/blob_bdev.h 00:03:13.188 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:13.188 TEST_HEADER include/spdk/blobfs.h 00:03:13.188 TEST_HEADER include/spdk/blob.h 00:03:13.188 TEST_HEADER include/spdk/conf.h 00:03:13.188 TEST_HEADER include/spdk/config.h 00:03:13.188 TEST_HEADER 
include/spdk/cpuset.h 00:03:13.188 TEST_HEADER include/spdk/crc16.h 00:03:13.188 TEST_HEADER include/spdk/crc32.h 00:03:13.188 TEST_HEADER include/spdk/crc64.h 00:03:13.188 TEST_HEADER include/spdk/dif.h 00:03:13.188 TEST_HEADER include/spdk/dma.h 00:03:13.188 TEST_HEADER include/spdk/endian.h 00:03:13.188 TEST_HEADER include/spdk/env_dpdk.h 00:03:13.188 TEST_HEADER include/spdk/env.h 00:03:13.188 TEST_HEADER include/spdk/event.h 00:03:13.188 TEST_HEADER include/spdk/fd_group.h 00:03:13.188 TEST_HEADER include/spdk/fd.h 00:03:13.188 TEST_HEADER include/spdk/file.h 00:03:13.188 TEST_HEADER include/spdk/fsdev.h 00:03:13.188 TEST_HEADER include/spdk/fsdev_module.h 00:03:13.188 TEST_HEADER include/spdk/ftl.h 00:03:13.188 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:13.188 TEST_HEADER include/spdk/gpt_spec.h 00:03:13.188 TEST_HEADER include/spdk/hexlify.h 00:03:13.188 TEST_HEADER include/spdk/histogram_data.h 00:03:13.188 TEST_HEADER include/spdk/idxd.h 00:03:13.188 TEST_HEADER include/spdk/idxd_spec.h 00:03:13.188 TEST_HEADER include/spdk/init.h 00:03:13.188 TEST_HEADER include/spdk/ioat.h 00:03:13.188 TEST_HEADER include/spdk/ioat_spec.h 00:03:13.188 TEST_HEADER include/spdk/iscsi_spec.h 00:03:13.448 TEST_HEADER include/spdk/json.h 00:03:13.448 TEST_HEADER include/spdk/jsonrpc.h 00:03:13.448 TEST_HEADER include/spdk/keyring.h 00:03:13.448 TEST_HEADER include/spdk/keyring_module.h 00:03:13.448 TEST_HEADER include/spdk/likely.h 00:03:13.448 TEST_HEADER include/spdk/log.h 00:03:13.448 TEST_HEADER include/spdk/lvol.h 00:03:13.448 TEST_HEADER include/spdk/md5.h 00:03:13.448 TEST_HEADER include/spdk/memory.h 00:03:13.448 TEST_HEADER include/spdk/mmio.h 00:03:13.448 TEST_HEADER include/spdk/nbd.h 00:03:13.448 TEST_HEADER include/spdk/net.h 00:03:13.448 TEST_HEADER include/spdk/notify.h 00:03:13.448 TEST_HEADER include/spdk/nvme.h 00:03:13.448 TEST_HEADER include/spdk/nvme_intel.h 00:03:13.448 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:13.448 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:13.448 TEST_HEADER include/spdk/nvme_spec.h 00:03:13.448 TEST_HEADER include/spdk/nvme_zns.h 00:03:13.448 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:13.448 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:13.448 TEST_HEADER include/spdk/nvmf.h 00:03:13.448 TEST_HEADER include/spdk/nvmf_spec.h 00:03:13.448 TEST_HEADER include/spdk/nvmf_transport.h 00:03:13.448 TEST_HEADER include/spdk/opal.h 00:03:13.448 TEST_HEADER include/spdk/opal_spec.h 00:03:13.448 TEST_HEADER include/spdk/pci_ids.h 00:03:13.448 TEST_HEADER include/spdk/pipe.h 00:03:13.448 TEST_HEADER include/spdk/queue.h 00:03:13.448 TEST_HEADER include/spdk/reduce.h 00:03:13.448 TEST_HEADER include/spdk/rpc.h 00:03:13.448 TEST_HEADER include/spdk/scheduler.h 00:03:13.448 TEST_HEADER include/spdk/scsi.h 00:03:13.448 TEST_HEADER include/spdk/scsi_spec.h 00:03:13.448 TEST_HEADER include/spdk/sock.h 00:03:13.448 TEST_HEADER include/spdk/stdinc.h 00:03:13.448 LINK verify 00:03:13.448 TEST_HEADER include/spdk/string.h 00:03:13.448 TEST_HEADER include/spdk/thread.h 00:03:13.448 TEST_HEADER include/spdk/trace.h 00:03:13.448 TEST_HEADER include/spdk/trace_parser.h 00:03:13.448 TEST_HEADER include/spdk/tree.h 00:03:13.448 TEST_HEADER include/spdk/ublk.h 00:03:13.448 TEST_HEADER include/spdk/util.h 00:03:13.448 TEST_HEADER include/spdk/uuid.h 00:03:13.448 TEST_HEADER include/spdk/version.h 00:03:13.448 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:13.448 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:13.448 TEST_HEADER include/spdk/vhost.h 00:03:13.448 TEST_HEADER 
include/spdk/vmd.h 00:03:13.448 TEST_HEADER include/spdk/xor.h 00:03:13.448 TEST_HEADER include/spdk/zipf.h 00:03:13.448 CXX test/cpp_headers/accel.o 00:03:13.448 CXX test/cpp_headers/accel_module.o 00:03:13.448 LINK bdev_svc 00:03:13.708 LINK spdk_dd 00:03:13.708 CXX test/cpp_headers/assert.o 00:03:13.708 LINK spdk_nvme 00:03:13.708 CC examples/sock/hello_world/hello_sock.o 00:03:13.708 LINK test_dma 00:03:13.708 CC examples/thread/thread/thread_ex.o 00:03:13.708 CXX test/cpp_headers/barrier.o 00:03:13.708 LINK spdk_nvme_identify 00:03:13.966 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:13.966 LINK spdk_nvme_perf 00:03:13.966 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:13.966 LINK spdk_top 00:03:13.966 CC app/fio/bdev/fio_plugin.o 00:03:13.966 CXX test/cpp_headers/base64.o 00:03:13.966 LINK thread 00:03:13.967 LINK hello_sock 00:03:13.967 CC test/app/histogram_perf/histogram_perf.o 00:03:13.967 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:14.225 CXX test/cpp_headers/bdev.o 00:03:14.225 CC app/vhost/vhost.o 00:03:14.225 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:14.225 LINK histogram_perf 00:03:14.225 LINK nvme_fuzz 00:03:14.225 CC test/env/vtophys/vtophys.o 00:03:14.225 CXX test/cpp_headers/bdev_module.o 00:03:14.225 CC test/env/mem_callbacks/mem_callbacks.o 00:03:14.484 CC examples/vmd/lsvmd/lsvmd.o 00:03:14.484 LINK vhost 00:03:14.484 CXX test/cpp_headers/bdev_zone.o 00:03:14.484 LINK spdk_bdev 00:03:14.484 LINK vtophys 00:03:14.484 LINK lsvmd 00:03:14.484 CC examples/idxd/perf/perf.o 00:03:14.484 CC test/app/jsoncat/jsoncat.o 00:03:14.484 CXX test/cpp_headers/bit_array.o 00:03:14.484 LINK vhost_fuzz 00:03:14.742 CXX test/cpp_headers/bit_pool.o 00:03:14.742 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:14.742 CC test/env/memory/memory_ut.o 00:03:14.742 LINK jsoncat 00:03:14.742 CXX test/cpp_headers/blob_bdev.o 00:03:14.742 CC examples/vmd/led/led.o 00:03:14.742 LINK env_dpdk_post_init 00:03:15.001 LINK idxd_perf 00:03:15.001 LINK mem_callbacks 00:03:15.001 LINK led 00:03:15.001 CC test/app/stub/stub.o 00:03:15.001 CC examples/accel/perf/accel_perf.o 00:03:15.001 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:15.001 CXX test/cpp_headers/blobfs_bdev.o 00:03:15.001 CXX test/cpp_headers/blobfs.o 00:03:15.001 CXX test/cpp_headers/blob.o 00:03:15.001 CXX test/cpp_headers/conf.o 00:03:15.258 CXX test/cpp_headers/config.o 00:03:15.258 LINK stub 00:03:15.259 CXX test/cpp_headers/cpuset.o 00:03:15.259 CXX test/cpp_headers/crc16.o 00:03:15.259 LINK hello_fsdev 00:03:15.259 CC test/env/pci/pci_ut.o 00:03:15.259 CXX test/cpp_headers/crc32.o 00:03:15.259 CC test/event/event_perf/event_perf.o 00:03:15.259 CC test/nvme/aer/aer.o 00:03:15.517 CC test/event/reactor/reactor.o 00:03:15.517 CC test/event/reactor_perf/reactor_perf.o 00:03:15.517 LINK accel_perf 00:03:15.517 LINK iscsi_fuzz 00:03:15.517 CXX test/cpp_headers/crc64.o 00:03:15.517 LINK event_perf 00:03:15.517 LINK reactor 00:03:15.517 LINK reactor_perf 00:03:15.776 LINK aer 00:03:15.776 CXX test/cpp_headers/dif.o 00:03:15.776 CC examples/blob/hello_world/hello_blob.o 00:03:15.776 LINK pci_ut 00:03:15.776 CC test/nvme/reset/reset.o 00:03:15.776 CC test/nvme/sgl/sgl.o 00:03:15.776 CC examples/blob/cli/blobcli.o 00:03:15.776 CXX test/cpp_headers/dma.o 00:03:15.776 CC test/event/app_repeat/app_repeat.o 00:03:15.776 CC test/rpc_client/rpc_client_test.o 00:03:15.776 LINK memory_ut 00:03:16.034 LINK hello_blob 00:03:16.034 CXX test/cpp_headers/endian.o 00:03:16.034 CC examples/nvme/hello_world/hello_world.o 00:03:16.034 LINK 
app_repeat 00:03:16.034 LINK reset 00:03:16.034 LINK rpc_client_test 00:03:16.034 LINK sgl 00:03:16.034 CXX test/cpp_headers/env_dpdk.o 00:03:16.034 CXX test/cpp_headers/env.o 00:03:16.292 CC examples/bdev/hello_world/hello_bdev.o 00:03:16.292 CC test/event/scheduler/scheduler.o 00:03:16.292 LINK hello_world 00:03:16.292 CC test/nvme/e2edp/nvme_dp.o 00:03:16.292 LINK blobcli 00:03:16.292 CC examples/nvme/reconnect/reconnect.o 00:03:16.292 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:16.292 CXX test/cpp_headers/event.o 00:03:16.292 CC examples/nvme/arbitration/arbitration.o 00:03:16.292 CXX test/cpp_headers/fd_group.o 00:03:16.292 LINK hello_bdev 00:03:16.550 LINK scheduler 00:03:16.550 CC test/accel/dif/dif.o 00:03:16.550 LINK nvme_dp 00:03:16.550 CXX test/cpp_headers/fd.o 00:03:16.550 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:16.550 CC examples/nvme/hotplug/hotplug.o 00:03:16.550 LINK reconnect 00:03:16.809 LINK arbitration 00:03:16.809 CXX test/cpp_headers/file.o 00:03:16.809 LINK nvme_manage 00:03:16.809 LINK cmb_copy 00:03:16.809 CC examples/bdev/bdevperf/bdevperf.o 00:03:16.809 CC test/nvme/overhead/overhead.o 00:03:16.809 LINK hotplug 00:03:16.809 CC test/nvme/err_injection/err_injection.o 00:03:16.809 CC test/blobfs/mkfs/mkfs.o 00:03:16.809 CXX test/cpp_headers/fsdev.o 00:03:17.067 CC test/nvme/startup/startup.o 00:03:17.067 CXX test/cpp_headers/fsdev_module.o 00:03:17.067 CC examples/nvme/abort/abort.o 00:03:17.067 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:17.067 LINK err_injection 00:03:17.067 LINK mkfs 00:03:17.067 LINK overhead 00:03:17.067 LINK dif 00:03:17.067 LINK startup 00:03:17.067 CXX test/cpp_headers/ftl.o 00:03:17.324 LINK pmr_persistence 00:03:17.324 CC test/lvol/esnap/esnap.o 00:03:17.324 CC test/nvme/reserve/reserve.o 00:03:17.324 CXX test/cpp_headers/fuse_dispatcher.o 00:03:17.324 CC test/nvme/simple_copy/simple_copy.o 00:03:17.324 LINK abort 00:03:17.324 CC test/nvme/connect_stress/connect_stress.o 00:03:17.324 CC test/nvme/boot_partition/boot_partition.o 00:03:17.608 CC test/nvme/compliance/nvme_compliance.o 00:03:17.608 CXX test/cpp_headers/gpt_spec.o 00:03:17.608 LINK reserve 00:03:17.608 LINK bdevperf 00:03:17.608 CXX test/cpp_headers/hexlify.o 00:03:17.608 CC test/bdev/bdevio/bdevio.o 00:03:17.608 LINK simple_copy 00:03:17.608 LINK connect_stress 00:03:17.608 LINK boot_partition 00:03:17.916 CXX test/cpp_headers/histogram_data.o 00:03:17.916 CC test/nvme/fused_ordering/fused_ordering.o 00:03:17.916 LINK nvme_compliance 00:03:17.916 CXX test/cpp_headers/idxd.o 00:03:17.916 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:17.916 CC test/nvme/fdp/fdp.o 00:03:17.916 CC test/nvme/cuse/cuse.o 00:03:17.916 CXX test/cpp_headers/idxd_spec.o 00:03:17.916 LINK bdevio 00:03:17.916 CXX test/cpp_headers/init.o 00:03:17.916 CC examples/nvmf/nvmf/nvmf.o 00:03:17.916 LINK fused_ordering 00:03:17.916 CXX test/cpp_headers/ioat.o 00:03:17.916 LINK doorbell_aers 00:03:18.174 CXX test/cpp_headers/ioat_spec.o 00:03:18.174 CXX test/cpp_headers/iscsi_spec.o 00:03:18.174 CXX test/cpp_headers/json.o 00:03:18.174 CXX test/cpp_headers/jsonrpc.o 00:03:18.174 CXX test/cpp_headers/keyring.o 00:03:18.174 LINK fdp 00:03:18.174 CXX test/cpp_headers/keyring_module.o 00:03:18.174 CXX test/cpp_headers/likely.o 00:03:18.432 CXX test/cpp_headers/log.o 00:03:18.432 CXX test/cpp_headers/lvol.o 00:03:18.432 LINK nvmf 00:03:18.432 CXX test/cpp_headers/md5.o 00:03:18.432 CXX test/cpp_headers/memory.o 00:03:18.432 CXX test/cpp_headers/mmio.o 00:03:18.432 CXX test/cpp_headers/nbd.o 
00:03:18.432 CXX test/cpp_headers/net.o 00:03:18.432 CXX test/cpp_headers/notify.o 00:03:18.432 CXX test/cpp_headers/nvme.o 00:03:18.432 CXX test/cpp_headers/nvme_intel.o 00:03:18.432 CXX test/cpp_headers/nvme_ocssd.o 00:03:18.432 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:18.432 CXX test/cpp_headers/nvme_spec.o 00:03:18.691 CXX test/cpp_headers/nvme_zns.o 00:03:18.691 CXX test/cpp_headers/nvmf_cmd.o 00:03:18.691 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:18.691 CXX test/cpp_headers/nvmf.o 00:03:18.691 CXX test/cpp_headers/nvmf_spec.o 00:03:18.691 CXX test/cpp_headers/nvmf_transport.o 00:03:18.691 CXX test/cpp_headers/opal.o 00:03:18.691 CXX test/cpp_headers/opal_spec.o 00:03:18.691 CXX test/cpp_headers/pci_ids.o 00:03:18.691 CXX test/cpp_headers/pipe.o 00:03:18.691 CXX test/cpp_headers/queue.o 00:03:18.691 CXX test/cpp_headers/reduce.o 00:03:18.951 CXX test/cpp_headers/rpc.o 00:03:18.951 CXX test/cpp_headers/scheduler.o 00:03:18.951 CXX test/cpp_headers/scsi.o 00:03:18.951 CXX test/cpp_headers/scsi_spec.o 00:03:18.951 CXX test/cpp_headers/sock.o 00:03:18.951 CXX test/cpp_headers/stdinc.o 00:03:18.951 CXX test/cpp_headers/string.o 00:03:18.951 CXX test/cpp_headers/thread.o 00:03:18.951 CXX test/cpp_headers/trace.o 00:03:18.951 CXX test/cpp_headers/trace_parser.o 00:03:18.951 CXX test/cpp_headers/tree.o 00:03:18.951 CXX test/cpp_headers/ublk.o 00:03:18.951 CXX test/cpp_headers/util.o 00:03:18.951 CXX test/cpp_headers/uuid.o 00:03:18.951 CXX test/cpp_headers/version.o 00:03:19.210 CXX test/cpp_headers/vfio_user_pci.o 00:03:19.210 CXX test/cpp_headers/vfio_user_spec.o 00:03:19.210 CXX test/cpp_headers/vhost.o 00:03:19.210 CXX test/cpp_headers/xor.o 00:03:19.210 CXX test/cpp_headers/vmd.o 00:03:19.210 LINK cuse 00:03:19.210 CXX test/cpp_headers/zipf.o 00:03:22.498 LINK esnap 00:03:22.498 00:03:22.498 real 1m25.339s 00:03:22.498 user 7m19.700s 00:03:22.498 sys 1m36.268s 00:03:22.498 10:46:15 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:22.498 10:46:15 make -- common/autotest_common.sh@10 -- $ set +x 00:03:22.498 ************************************ 00:03:22.498 END TEST make 00:03:22.498 ************************************ 00:03:22.498 10:46:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:22.498 10:46:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:22.498 10:46:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:22.498 10:46:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.498 10:46:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:22.498 10:46:15 -- pm/common@44 -- $ pid=5475 00:03:22.498 10:46:15 -- pm/common@50 -- $ kill -TERM 5475 00:03:22.498 10:46:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.498 10:46:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:22.498 10:46:15 -- pm/common@44 -- $ pid=5477 00:03:22.498 10:46:15 -- pm/common@50 -- $ kill -TERM 5477 00:03:22.498 10:46:15 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:22.498 10:46:15 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:22.498 10:46:15 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:22.498 10:46:15 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:22.498 10:46:15 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:22.758 10:46:15 -- 
common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:22.758 10:46:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:22.758 10:46:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:22.758 10:46:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:22.758 10:46:15 -- scripts/common.sh@336 -- # IFS=.-: 00:03:22.758 10:46:15 -- scripts/common.sh@336 -- # read -ra ver1 00:03:22.758 10:46:15 -- scripts/common.sh@337 -- # IFS=.-: 00:03:22.758 10:46:15 -- scripts/common.sh@337 -- # read -ra ver2 00:03:22.758 10:46:15 -- scripts/common.sh@338 -- # local 'op=<' 00:03:22.758 10:46:15 -- scripts/common.sh@340 -- # ver1_l=2 00:03:22.758 10:46:15 -- scripts/common.sh@341 -- # ver2_l=1 00:03:22.758 10:46:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:22.758 10:46:15 -- scripts/common.sh@344 -- # case "$op" in 00:03:22.758 10:46:15 -- scripts/common.sh@345 -- # : 1 00:03:22.758 10:46:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:22.758 10:46:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:22.758 10:46:15 -- scripts/common.sh@365 -- # decimal 1 00:03:22.758 10:46:15 -- scripts/common.sh@353 -- # local d=1 00:03:22.758 10:46:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:22.758 10:46:15 -- scripts/common.sh@355 -- # echo 1 00:03:22.758 10:46:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:22.758 10:46:15 -- scripts/common.sh@366 -- # decimal 2 00:03:22.758 10:46:15 -- scripts/common.sh@353 -- # local d=2 00:03:22.758 10:46:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:22.758 10:46:15 -- scripts/common.sh@355 -- # echo 2 00:03:22.758 10:46:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:22.758 10:46:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:22.758 10:46:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:22.758 10:46:15 -- scripts/common.sh@368 -- # return 0 00:03:22.758 10:46:15 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:22.758 10:46:15 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:22.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.758 --rc genhtml_branch_coverage=1 00:03:22.758 --rc genhtml_function_coverage=1 00:03:22.758 --rc genhtml_legend=1 00:03:22.758 --rc geninfo_all_blocks=1 00:03:22.758 --rc geninfo_unexecuted_blocks=1 00:03:22.758 00:03:22.758 ' 00:03:22.758 10:46:15 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:22.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.758 --rc genhtml_branch_coverage=1 00:03:22.758 --rc genhtml_function_coverage=1 00:03:22.758 --rc genhtml_legend=1 00:03:22.758 --rc geninfo_all_blocks=1 00:03:22.758 --rc geninfo_unexecuted_blocks=1 00:03:22.758 00:03:22.758 ' 00:03:22.758 10:46:15 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:22.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.758 --rc genhtml_branch_coverage=1 00:03:22.758 --rc genhtml_function_coverage=1 00:03:22.758 --rc genhtml_legend=1 00:03:22.758 --rc geninfo_all_blocks=1 00:03:22.758 --rc geninfo_unexecuted_blocks=1 00:03:22.758 00:03:22.758 ' 00:03:22.758 10:46:15 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:22.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.758 --rc genhtml_branch_coverage=1 00:03:22.758 --rc genhtml_function_coverage=1 00:03:22.758 --rc genhtml_legend=1 00:03:22.758 --rc geninfo_all_blocks=1 00:03:22.758 --rc geninfo_unexecuted_blocks=1 
00:03:22.758 00:03:22.758 ' 00:03:22.758 10:46:15 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:22.758 10:46:15 -- nvmf/common.sh@7 -- # uname -s 00:03:22.758 10:46:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:22.758 10:46:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:22.758 10:46:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:22.758 10:46:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:22.758 10:46:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:22.758 10:46:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:22.758 10:46:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:22.758 10:46:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:22.758 10:46:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:22.758 10:46:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:22.758 10:46:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:03:22.758 10:46:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:03:22.758 10:46:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:22.758 10:46:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:22.758 10:46:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:22.758 10:46:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:22.758 10:46:15 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:22.758 10:46:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:22.758 10:46:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:22.758 10:46:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:22.758 10:46:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:22.758 10:46:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.758 10:46:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.758 10:46:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.758 10:46:15 -- paths/export.sh@5 -- # export PATH 00:03:22.758 10:46:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.758 10:46:15 -- nvmf/common.sh@51 -- # : 0 00:03:22.758 10:46:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:22.758 10:46:15 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:22.758 10:46:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:22.758 10:46:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:22.758 10:46:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:22.758 10:46:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:22.758 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:22.758 10:46:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:22.758 10:46:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:22.758 10:46:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:22.758 10:46:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:22.758 10:46:15 -- spdk/autotest.sh@32 -- # uname -s 00:03:22.758 10:46:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:22.758 10:46:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:22.758 10:46:15 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:22.758 10:46:15 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:22.758 10:46:15 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:22.758 10:46:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:22.758 10:46:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:22.758 10:46:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:22.758 10:46:15 -- spdk/autotest.sh@48 -- # udevadm_pid=54566 00:03:22.758 10:46:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:22.758 10:46:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:22.758 10:46:15 -- pm/common@17 -- # local monitor 00:03:22.759 10:46:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.759 10:46:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.759 10:46:15 -- pm/common@25 -- # sleep 1 00:03:22.759 10:46:15 -- pm/common@21 -- # date +%s 00:03:22.759 10:46:15 -- pm/common@21 -- # date +%s 00:03:22.759 10:46:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733741175 00:03:22.759 10:46:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733741175 00:03:22.759 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733741175_collect-cpu-load.pm.log 00:03:22.759 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733741175_collect-vmstat.pm.log 00:03:23.696 10:46:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:23.696 10:46:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:23.696 10:46:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:23.696 10:46:16 -- common/autotest_common.sh@10 -- # set +x 00:03:23.955 10:46:16 -- spdk/autotest.sh@59 -- # create_test_list 00:03:23.955 10:46:16 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:23.955 10:46:16 -- common/autotest_common.sh@10 -- # set +x 00:03:23.955 10:46:16 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:23.955 10:46:16 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:23.955 10:46:16 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:23.955 10:46:16 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:23.955 10:46:16 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 
00:03:23.955 10:46:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:23.955 10:46:16 -- common/autotest_common.sh@1457 -- # uname 00:03:23.955 10:46:16 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:23.955 10:46:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:23.955 10:46:16 -- common/autotest_common.sh@1477 -- # uname 00:03:23.955 10:46:16 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:23.955 10:46:16 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:23.955 10:46:16 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:23.955 lcov: LCOV version 1.15 00:03:23.955 10:46:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:42.097 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:42.097 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:00.186 10:46:50 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:00.186 10:46:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.186 10:46:50 -- common/autotest_common.sh@10 -- # set +x 00:04:00.186 10:46:50 -- spdk/autotest.sh@78 -- # rm -f 00:04:00.186 10:46:50 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.186 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.186 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:00.186 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:00.186 10:46:51 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:00.186 10:46:51 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:00.186 10:46:51 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:00.186 10:46:51 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:00.186 10:46:51 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:00.186 10:46:51 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:00.186 10:46:51 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:00.186 10:46:51 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:00.186 10:46:51 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:00.186 10:46:51 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:00.186 10:46:51 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:00.186 10:46:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:00.186 10:46:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.186 10:46:51 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:00.186 10:46:51 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:00.186 10:46:51 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:00.186 10:46:51 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:00.186 10:46:51 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:00.186 
10:46:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:00.186 10:46:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.186 10:46:51 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:00.186 10:46:51 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:00.186 10:46:51 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:00.186 10:46:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:00.186 10:46:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.186 10:46:51 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:00.186 10:46:51 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:00.186 10:46:51 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:00.187 10:46:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:00.187 10:46:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.187 10:46:51 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:00.187 10:46:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.187 10:46:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.187 10:46:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:00.187 10:46:51 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:00.187 10:46:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:00.187 No valid GPT data, bailing 00:04:00.187 10:46:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:00.187 10:46:51 -- scripts/common.sh@394 -- # pt= 00:04:00.187 10:46:51 -- scripts/common.sh@395 -- # return 1 00:04:00.187 10:46:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:00.187 1+0 records in 00:04:00.187 1+0 records out 00:04:00.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00697419 s, 150 MB/s 00:04:00.187 10:46:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.187 10:46:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.187 10:46:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:00.187 10:46:51 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:00.187 10:46:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:00.187 No valid GPT data, bailing 00:04:00.187 10:46:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:00.187 10:46:51 -- scripts/common.sh@394 -- # pt= 00:04:00.187 10:46:51 -- scripts/common.sh@395 -- # return 1 00:04:00.187 10:46:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:00.187 1+0 records in 00:04:00.187 1+0 records out 00:04:00.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00628864 s, 167 MB/s 00:04:00.187 10:46:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.187 10:46:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.187 10:46:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:00.187 10:46:51 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:00.187 10:46:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:00.187 No valid GPT data, bailing 00:04:00.187 10:46:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:00.187 10:46:51 -- scripts/common.sh@394 -- # pt= 00:04:00.187 10:46:51 -- scripts/common.sh@395 -- # return 1 00:04:00.187 
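A namespace is only wiped when it is demonstrably unused: zoned devices are skipped, spdk-gpt.py and blkid -s PTTYPE must both come back empty, and only then is its first MiB zeroed with dd, as the trace repeats for each remaining namespace below. A condensed sketch of that per-namespace decision using the same tools (the GPT probe is folded into the blkid check here for brevity):

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do                            # namespaces only, skip partition nodes
        name=$(basename "$dev")
        zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
        [[ "$zoned" != none ]] && continue                      # never touch zoned namespaces
        pt=$(blkid -s PTTYPE -o value "$dev")                   # empty when no partition table is present
        if [[ -z "$pt" ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1             # zero the first MiB of an unused namespace
        fi
    done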
10:46:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:00.187 1+0 records in 00:04:00.187 1+0 records out 00:04:00.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650397 s, 161 MB/s 00:04:00.187 10:46:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.187 10:46:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.187 10:46:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:00.187 10:46:51 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:00.187 10:46:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:00.187 No valid GPT data, bailing 00:04:00.187 10:46:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:00.187 10:46:51 -- scripts/common.sh@394 -- # pt= 00:04:00.187 10:46:51 -- scripts/common.sh@395 -- # return 1 00:04:00.187 10:46:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:00.187 1+0 records in 00:04:00.187 1+0 records out 00:04:00.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00575935 s, 182 MB/s 00:04:00.187 10:46:51 -- spdk/autotest.sh@105 -- # sync 00:04:00.187 10:46:51 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:00.187 10:46:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:00.187 10:46:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:01.121 10:46:54 -- spdk/autotest.sh@111 -- # uname -s 00:04:01.121 10:46:54 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:01.121 10:46:54 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:01.121 10:46:54 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:02.061 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.061 Hugepages 00:04:02.061 node hugesize free / total 00:04:02.061 node0 1048576kB 0 / 0 00:04:02.061 node0 2048kB 0 / 0 00:04:02.061 00:04:02.061 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:02.061 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:02.324 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:02.325 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:02.325 10:46:55 -- spdk/autotest.sh@117 -- # uname -s 00:04:02.325 10:46:55 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:02.325 10:46:55 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:02.325 10:46:55 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.276 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.276 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.276 10:46:56 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:04.651 10:46:57 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:04.651 10:46:57 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:04.651 10:46:57 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:04.651 10:46:57 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:04.651 10:46:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:04.651 10:46:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:04.651 10:46:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:04.651 10:46:57 -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:04.651 10:46:57 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:04.651 10:46:57 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:04.651 10:46:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:04.652 10:46:57 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:04.910 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.910 Waiting for block devices as requested 00:04:04.910 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.169 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:05.169 10:46:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:05.169 10:46:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:05.169 10:46:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:05.169 10:46:58 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:05.169 10:46:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:05.169 10:46:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:05.169 10:46:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:05.169 10:46:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:05.169 10:46:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:05.169 10:46:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:05.169 10:46:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:05.169 10:46:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:05.169 10:46:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:05.169 10:46:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:05.169 10:46:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:05.169 10:46:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:05.169 10:46:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:05.169 10:46:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:05.169 10:46:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:05.169 10:46:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:05.169 10:46:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:05.169 10:46:58 -- common/autotest_common.sh@1543 -- # continue 00:04:05.169 10:46:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:05.169 10:46:58 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:05.169 10:46:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:05.169 10:46:58 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:05.169 10:46:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:05.169 10:46:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:05.169 10:46:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:05.169 10:46:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:05.169 10:46:58 -- 
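get_nvme_bdfs builds the list of NVMe PCI addresses by piping scripts/gen_nvme.sh into jq, which is where the 0000:00:10.0 and 0000:00:11.0 pair used throughout the cleanup comes from. A small sketch of that enumeration plus a check of which driver currently owns each device; the readlink path is ordinary sysfs layout rather than something printed in this trace, so treat it as an assumption:

    mapfile -t bdfs < <(scripts/gen_nvme.sh | jq -r '.config[].params.traddr')   # e.g. 0000:00:10.0 0000:00:11.0
    for bdf in "${bdfs[@]}"; do
        drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")      # nvme or uio_pci_generic here
        echo "$bdf is bound to $drv"
    done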
common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:05.169 10:46:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:05.169 10:46:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:05.169 10:46:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:05.169 10:46:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:05.169 10:46:58 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:05.169 10:46:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:05.169 10:46:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:05.169 10:46:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:05.169 10:46:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:05.169 10:46:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:05.169 10:46:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:05.169 10:46:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:05.169 10:46:58 -- common/autotest_common.sh@1543 -- # continue 00:04:05.169 10:46:58 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:05.169 10:46:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:05.169 10:46:58 -- common/autotest_common.sh@10 -- # set +x 00:04:05.169 10:46:58 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:05.169 10:46:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.169 10:46:58 -- common/autotest_common.sh@10 -- # set +x 00:04:05.169 10:46:58 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:06.106 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.106 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:06.365 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:06.365 10:46:59 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:06.365 10:46:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:06.365 10:46:59 -- common/autotest_common.sh@10 -- # set +x 00:04:06.365 10:46:59 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:06.365 10:46:59 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:06.365 10:46:59 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:06.365 10:46:59 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:06.365 10:46:59 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:06.365 10:46:59 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:06.365 10:46:59 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:06.365 10:46:59 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:06.365 10:46:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:06.365 10:46:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:06.365 10:46:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:06.365 10:46:59 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:06.365 10:46:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:06.365 10:46:59 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:06.365 10:46:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:06.365 10:46:59 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:06.365 10:46:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:06.365 10:46:59 -- 
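For each controller the pre-cleanup pass resolves the /dev/nvmeX node behind the BDF and then reads OACS and unvmcap with nvme id-ctrl: bit 3 of OACS indicates namespace-management support, and an unvmcap of 0, as both controllers report here, means there is no unallocated capacity to reclaim, so the loop simply continues. A compact sketch of that decision with the same nvme-cli calls:

    for ctrl in /dev/nvme0 /dev/nvme1; do
        oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)        # e.g. 0x12a
        unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)  # unallocated NVM capacity
        if (( (oacs & 0x8) != 0 )) && (( unvmcap != 0 )); then
            echo "$ctrl supports namespace management and has capacity left to revert"
        fi
    done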
common/autotest_common.sh@1566 -- # device=0x0010 00:04:06.365 10:46:59 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:06.365 10:46:59 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:06.365 10:46:59 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:06.365 10:46:59 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:06.365 10:46:59 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:06.365 10:46:59 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:06.365 10:46:59 -- common/autotest_common.sh@1572 -- # return 0 00:04:06.365 10:46:59 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:06.365 10:46:59 -- common/autotest_common.sh@1580 -- # return 0 00:04:06.365 10:46:59 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:06.365 10:46:59 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:06.365 10:46:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:06.365 10:46:59 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:06.365 10:46:59 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:06.365 10:46:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.365 10:46:59 -- common/autotest_common.sh@10 -- # set +x 00:04:06.365 10:46:59 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:06.365 10:46:59 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:06.365 10:46:59 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:06.365 10:46:59 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:06.365 10:46:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.365 10:46:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.365 10:46:59 -- common/autotest_common.sh@10 -- # set +x 00:04:06.365 ************************************ 00:04:06.365 START TEST env 00:04:06.365 ************************************ 00:04:06.365 10:46:59 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:06.625 * Looking for test storage... 00:04:06.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:06.625 10:46:59 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:06.625 10:46:59 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:06.625 10:46:59 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:06.625 10:46:59 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:06.625 10:46:59 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.625 10:46:59 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.625 10:46:59 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.625 10:46:59 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.625 10:46:59 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.625 10:46:59 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.625 10:46:59 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.625 10:46:59 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.625 10:46:59 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.625 10:46:59 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.625 10:46:59 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.625 10:46:59 env -- scripts/common.sh@344 -- # case "$op" in 00:04:06.625 10:46:59 env -- scripts/common.sh@345 -- # : 1 00:04:06.625 10:46:59 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.625 10:46:59 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.625 10:46:59 env -- scripts/common.sh@365 -- # decimal 1 00:04:06.625 10:46:59 env -- scripts/common.sh@353 -- # local d=1 00:04:06.625 10:46:59 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.625 10:46:59 env -- scripts/common.sh@355 -- # echo 1 00:04:06.625 10:46:59 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.625 10:46:59 env -- scripts/common.sh@366 -- # decimal 2 00:04:06.625 10:46:59 env -- scripts/common.sh@353 -- # local d=2 00:04:06.625 10:46:59 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.625 10:46:59 env -- scripts/common.sh@355 -- # echo 2 00:04:06.625 10:46:59 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.625 10:46:59 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.625 10:46:59 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.625 10:46:59 env -- scripts/common.sh@368 -- # return 0 00:04:06.625 10:46:59 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.625 10:46:59 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:06.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.625 --rc genhtml_branch_coverage=1 00:04:06.625 --rc genhtml_function_coverage=1 00:04:06.625 --rc genhtml_legend=1 00:04:06.625 --rc geninfo_all_blocks=1 00:04:06.625 --rc geninfo_unexecuted_blocks=1 00:04:06.625 00:04:06.625 ' 00:04:06.625 10:46:59 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:06.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.625 --rc genhtml_branch_coverage=1 00:04:06.625 --rc genhtml_function_coverage=1 00:04:06.625 --rc genhtml_legend=1 00:04:06.625 --rc geninfo_all_blocks=1 00:04:06.625 --rc geninfo_unexecuted_blocks=1 00:04:06.625 00:04:06.625 ' 00:04:06.625 10:46:59 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:06.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.625 --rc genhtml_branch_coverage=1 00:04:06.625 --rc genhtml_function_coverage=1 00:04:06.625 --rc genhtml_legend=1 00:04:06.625 --rc geninfo_all_blocks=1 00:04:06.625 --rc geninfo_unexecuted_blocks=1 00:04:06.625 00:04:06.625 ' 00:04:06.625 10:46:59 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:06.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.625 --rc genhtml_branch_coverage=1 00:04:06.625 --rc genhtml_function_coverage=1 00:04:06.625 --rc genhtml_legend=1 00:04:06.625 --rc geninfo_all_blocks=1 00:04:06.625 --rc geninfo_unexecuted_blocks=1 00:04:06.625 00:04:06.625 ' 00:04:06.625 10:46:59 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:06.625 10:46:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.625 10:46:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.625 10:46:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.625 ************************************ 00:04:06.625 START TEST env_memory 00:04:06.625 ************************************ 00:04:06.625 10:46:59 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:06.625 00:04:06.625 00:04:06.625 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.625 http://cunit.sourceforge.net/ 00:04:06.625 00:04:06.625 00:04:06.625 Suite: mem_map_2mb 00:04:06.884 Test: alloc and free memory map ...[2024-12-09 10:46:59.817832] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 310:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:06.884 passed 00:04:06.884 Test: mem map translation ...[2024-12-09 10:46:59.842649] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:06.884 [2024-12-09 10:46:59.842718] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:06.884 [2024-12-09 10:46:59.842782] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 622:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:06.884 [2024-12-09 10:46:59.842790] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 638:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:06.884 passed 00:04:06.884 Test: mem map registration ...[2024-12-09 10:46:59.893719] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:06.884 [2024-12-09 10:46:59.893790] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:06.884 passed 00:04:06.884 Test: mem map adjacent registrations ...passed 00:04:06.884 Suite: mem_map_4kb 00:04:06.884 Test: alloc and free memory map ...[2024-12-09 10:47:00.032376] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 310:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:06.884 passed 00:04:06.884 Test: mem map translation ...[2024-12-09 10:47:00.060760] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=4096 len=1234 00:04:06.884 [2024-12-09 10:47:00.060816] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 628:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=4096 00:04:07.143 [2024-12-09 10:47:00.082964] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 622:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:07.143 [2024-12-09 10:47:00.082999] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 638:spdk_mem_map_set_translation: *ERROR*: could not get 0xfffffffff000 map 00:04:07.143 passed 00:04:07.143 Test: mem map registration ...[2024-12-09 10:47:00.175582] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=1000 len=1234 00:04:07.143 [2024-12-09 10:47:00.175632] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 380:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=4096 00:04:07.143 passed 00:04:07.143 Test: mem map adjacent registrations ...passed 00:04:07.143 00:04:07.144 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.144 suites 2 2 n/a 0 0 00:04:07.144 tests 8 8 8 0 0 00:04:07.144 asserts 304 304 304 0 n/a 00:04:07.144 00:04:07.144 Elapsed time = 0.498 seconds 00:04:07.144 00:04:07.144 real 0m0.522s 00:04:07.144 user 0m0.492s 00:04:07.144 sys 0m0.024s 00:04:07.144 10:47:00 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.144 10:47:00 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:07.144 ************************************ 00:04:07.144 END TEST 
env_memory 00:04:07.144 ************************************ 00:04:07.404 10:47:00 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:07.404 10:47:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.404 10:47:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.404 10:47:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.404 ************************************ 00:04:07.404 START TEST env_vtophys 00:04:07.404 ************************************ 00:04:07.404 10:47:00 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:07.404 EAL: lib.eal log level changed from notice to debug 00:04:07.404 EAL: Detected lcore 0 as core 0 on socket 0 00:04:07.404 EAL: Detected lcore 1 as core 0 on socket 0 00:04:07.404 EAL: Detected lcore 2 as core 0 on socket 0 00:04:07.404 EAL: Detected lcore 3 as core 0 on socket 0 00:04:07.404 EAL: Detected lcore 4 as core 0 on socket 0 00:04:07.404 EAL: Detected lcore 5 as core 0 on socket 0 00:04:07.404 EAL: Detected lcore 6 as core 0 on socket 0 00:04:07.404 EAL: Detected lcore 7 as core 0 on socket 0 00:04:07.404 EAL: Detected lcore 8 as core 0 on socket 0 00:04:07.404 EAL: Detected lcore 9 as core 0 on socket 0 00:04:07.404 EAL: Maximum logical cores by configuration: 128 00:04:07.404 EAL: Detected CPU lcores: 10 00:04:07.404 EAL: Detected NUMA nodes: 1 00:04:07.404 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:07.404 EAL: Detected shared linkage of DPDK 00:04:07.404 EAL: No shared files mode enabled, IPC will be disabled 00:04:07.404 EAL: Selected IOVA mode 'PA' 00:04:07.404 EAL: Probing VFIO support... 00:04:07.404 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:07.404 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:07.404 EAL: Ask a virtual area of 0x2e000 bytes 00:04:07.404 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:07.404 EAL: Setting up physically contiguous memory... 
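EAL probes for VFIO above and finds /sys/module/vfio missing (the vfio_pci check a little further on fails the same way), so it falls back to uio_pci_generic and the 'PA' IOVA mode. A quick pre-flight check for VFIO on a host with a working IOMMU might look like the following; the modprobe is a standard module load and not something this run performs:

    if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
        echo "VFIO already loaded"
    else
        sudo modprobe vfio-pci || echo "vfio-pci unavailable; EAL will fall back to uio/PA mode"
    fi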
00:04:07.404 EAL: Setting maximum number of open files to 524288 00:04:07.404 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:07.404 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:07.404 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.404 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:07.404 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.404 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.404 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:07.404 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:07.404 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.404 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:07.404 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.404 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.404 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:07.404 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:07.404 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.404 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:07.404 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.404 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.404 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:07.404 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:07.404 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.404 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:07.404 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.404 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.404 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:07.404 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:07.404 EAL: Hugepages will be freed exactly as allocated. 00:04:07.404 EAL: No shared files mode enabled, IPC is disabled 00:04:07.404 EAL: No shared files mode enabled, IPC is disabled 00:04:07.404 EAL: TSC frequency is ~2290000 KHz 00:04:07.404 EAL: Main lcore 0 is ready (tid=7fa2095ffa00;cpuset=[0]) 00:04:07.404 EAL: Trying to obtain current memory policy. 00:04:07.404 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.404 EAL: Restoring previous memory policy: 0 00:04:07.404 EAL: request: mp_malloc_sync 00:04:07.404 EAL: No shared files mode enabled, IPC is disabled 00:04:07.404 EAL: Heap on socket 0 was expanded by 2MB 00:04:07.404 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:07.404 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:07.404 EAL: Mem event callback 'spdk:(nil)' registered 00:04:07.404 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:07.404 00:04:07.404 00:04:07.404 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.404 http://cunit.sourceforge.net/ 00:04:07.404 00:04:07.404 00:04:07.404 Suite: components_suite 00:04:07.404 Test: vtophys_malloc_test ...passed 00:04:07.404 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:07.404 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.404 EAL: Restoring previous memory policy: 4 00:04:07.404 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.404 EAL: request: mp_malloc_sync 00:04:07.404 EAL: No shared files mode enabled, IPC is disabled 00:04:07.404 EAL: Heap on socket 0 was expanded by 4MB 00:04:07.404 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.404 EAL: request: mp_malloc_sync 00:04:07.404 EAL: No shared files mode enabled, IPC is disabled 00:04:07.404 EAL: Heap on socket 0 was shrunk by 4MB 00:04:07.404 EAL: Trying to obtain current memory policy. 00:04:07.404 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.404 EAL: Restoring previous memory policy: 4 00:04:07.404 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.404 EAL: request: mp_malloc_sync 00:04:07.404 EAL: No shared files mode enabled, IPC is disabled 00:04:07.404 EAL: Heap on socket 0 was expanded by 6MB 00:04:07.404 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.404 EAL: request: mp_malloc_sync 00:04:07.404 EAL: No shared files mode enabled, IPC is disabled 00:04:07.404 EAL: Heap on socket 0 was shrunk by 6MB 00:04:07.404 EAL: Trying to obtain current memory policy. 00:04:07.404 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.404 EAL: Restoring previous memory policy: 4 00:04:07.404 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.404 EAL: request: mp_malloc_sync 00:04:07.404 EAL: No shared files mode enabled, IPC is disabled 00:04:07.404 EAL: Heap on socket 0 was expanded by 10MB 00:04:07.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.405 EAL: request: mp_malloc_sync 00:04:07.405 EAL: No shared files mode enabled, IPC is disabled 00:04:07.405 EAL: Heap on socket 0 was shrunk by 10MB 00:04:07.405 EAL: Trying to obtain current memory policy. 00:04:07.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.405 EAL: Restoring previous memory policy: 4 00:04:07.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.405 EAL: request: mp_malloc_sync 00:04:07.405 EAL: No shared files mode enabled, IPC is disabled 00:04:07.405 EAL: Heap on socket 0 was expanded by 18MB 00:04:07.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.405 EAL: request: mp_malloc_sync 00:04:07.405 EAL: No shared files mode enabled, IPC is disabled 00:04:07.405 EAL: Heap on socket 0 was shrunk by 18MB 00:04:07.405 EAL: Trying to obtain current memory policy. 00:04:07.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.405 EAL: Restoring previous memory policy: 4 00:04:07.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.405 EAL: request: mp_malloc_sync 00:04:07.405 EAL: No shared files mode enabled, IPC is disabled 00:04:07.405 EAL: Heap on socket 0 was expanded by 34MB 00:04:07.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.405 EAL: request: mp_malloc_sync 00:04:07.405 EAL: No shared files mode enabled, IPC is disabled 00:04:07.405 EAL: Heap on socket 0 was shrunk by 34MB 00:04:07.405 EAL: Trying to obtain current memory policy. 
00:04:07.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.405 EAL: Restoring previous memory policy: 4 00:04:07.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.405 EAL: request: mp_malloc_sync 00:04:07.405 EAL: No shared files mode enabled, IPC is disabled 00:04:07.405 EAL: Heap on socket 0 was expanded by 66MB 00:04:07.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.664 EAL: request: mp_malloc_sync 00:04:07.664 EAL: No shared files mode enabled, IPC is disabled 00:04:07.664 EAL: Heap on socket 0 was shrunk by 66MB 00:04:07.664 EAL: Trying to obtain current memory policy. 00:04:07.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.664 EAL: Restoring previous memory policy: 4 00:04:07.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.664 EAL: request: mp_malloc_sync 00:04:07.664 EAL: No shared files mode enabled, IPC is disabled 00:04:07.664 EAL: Heap on socket 0 was expanded by 130MB 00:04:07.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.664 EAL: request: mp_malloc_sync 00:04:07.664 EAL: No shared files mode enabled, IPC is disabled 00:04:07.664 EAL: Heap on socket 0 was shrunk by 130MB 00:04:07.664 EAL: Trying to obtain current memory policy. 00:04:07.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.664 EAL: Restoring previous memory policy: 4 00:04:07.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.664 EAL: request: mp_malloc_sync 00:04:07.664 EAL: No shared files mode enabled, IPC is disabled 00:04:07.664 EAL: Heap on socket 0 was expanded by 258MB 00:04:07.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.664 EAL: request: mp_malloc_sync 00:04:07.664 EAL: No shared files mode enabled, IPC is disabled 00:04:07.664 EAL: Heap on socket 0 was shrunk by 258MB 00:04:07.664 EAL: Trying to obtain current memory policy. 00:04:07.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.923 EAL: Restoring previous memory policy: 4 00:04:07.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.923 EAL: request: mp_malloc_sync 00:04:07.923 EAL: No shared files mode enabled, IPC is disabled 00:04:07.923 EAL: Heap on socket 0 was expanded by 514MB 00:04:07.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.923 EAL: request: mp_malloc_sync 00:04:07.923 EAL: No shared files mode enabled, IPC is disabled 00:04:07.923 EAL: Heap on socket 0 was shrunk by 514MB 00:04:07.923 EAL: Trying to obtain current memory policy. 
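The vtophys suite walks the allocator from a few megabytes up through 1 GB, each step restoring the MPOL_PREFERRED policy for socket 0 before the spdk mem event callback grows and then shrinks the heap. On a multi-socket machine the NUMA policy actually in effect can be inspected with numactl; this is an optional observation step, not part of the test itself:

    numactl --show    # prints the memory policy and preferred node for the current shell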
00:04:07.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.182 EAL: Restoring previous memory policy: 4 00:04:08.182 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.182 EAL: request: mp_malloc_sync 00:04:08.182 EAL: No shared files mode enabled, IPC is disabled 00:04:08.182 EAL: Heap on socket 0 was expanded by 1026MB 00:04:08.446 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.446 passed 00:04:08.446 00:04:08.446 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.446 suites 1 1 n/a 0 0 00:04:08.446 tests 2 2 2 0 0 00:04:08.446 asserts 5575 5575 5575 0 n/a 00:04:08.446 00:04:08.446 Elapsed time = 1.002 seconds 00:04:08.446 EAL: request: mp_malloc_sync 00:04:08.446 EAL: No shared files mode enabled, IPC is disabled 00:04:08.446 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:08.446 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.446 EAL: request: mp_malloc_sync 00:04:08.446 EAL: No shared files mode enabled, IPC is disabled 00:04:08.446 EAL: Heap on socket 0 was shrunk by 2MB 00:04:08.446 EAL: No shared files mode enabled, IPC is disabled 00:04:08.446 EAL: No shared files mode enabled, IPC is disabled 00:04:08.446 EAL: No shared files mode enabled, IPC is disabled 00:04:08.446 00:04:08.446 real 0m1.220s 00:04:08.446 user 0m0.656s 00:04:08.446 sys 0m0.437s 00:04:08.446 10:47:01 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.446 10:47:01 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:08.446 ************************************ 00:04:08.446 END TEST env_vtophys 00:04:08.446 ************************************ 00:04:08.446 10:47:01 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:08.446 10:47:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.446 10:47:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.712 10:47:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.712 ************************************ 00:04:08.712 START TEST env_pci 00:04:08.712 ************************************ 00:04:08.712 10:47:01 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:08.712 00:04:08.712 00:04:08.712 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.712 http://cunit.sourceforge.net/ 00:04:08.712 00:04:08.712 00:04:08.712 Suite: pci 00:04:08.712 Test: pci_hook ...[2024-12-09 10:47:01.655098] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56828 has claimed it 00:04:08.712 passed 00:04:08.712 00:04:08.712 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.712 suites 1 1 n/a 0 0 00:04:08.712 tests 1 1 1 0 0 00:04:08.712 asserts 25 25 25 0 n/a 00:04:08.712 00:04:08.712 Elapsed time = 0.002 seconds 00:04:08.712 EAL: Cannot find device (10000:00:01.0) 00:04:08.712 EAL: Failed to attach device on primary process 00:04:08.712 00:04:08.712 real 0m0.030s 00:04:08.712 user 0m0.013s 00:04:08.712 sys 0m0.017s 00:04:08.712 10:47:01 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.712 10:47:01 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:08.712 ************************************ 00:04:08.712 END TEST env_pci 00:04:08.712 ************************************ 00:04:08.712 10:47:01 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:08.712 10:47:01 env -- env/env.sh@15 -- # uname 00:04:08.712 10:47:01 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:08.712 10:47:01 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:08.712 10:47:01 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:08.712 10:47:01 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:08.712 10:47:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.712 10:47:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.712 ************************************ 00:04:08.712 START TEST env_dpdk_post_init 00:04:08.712 ************************************ 00:04:08.712 10:47:01 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:08.712 EAL: Detected CPU lcores: 10 00:04:08.712 EAL: Detected NUMA nodes: 1 00:04:08.712 EAL: Detected shared linkage of DPDK 00:04:08.712 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:08.712 EAL: Selected IOVA mode 'PA' 00:04:08.712 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:08.971 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:08.971 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:08.971 Starting DPDK initialization... 00:04:08.971 Starting SPDK post initialization... 00:04:08.971 SPDK NVMe probe 00:04:08.971 Attaching to 0000:00:10.0 00:04:08.971 Attaching to 0000:00:11.0 00:04:08.971 Attached to 0000:00:10.0 00:04:08.971 Attached to 0000:00:11.0 00:04:08.971 Cleaning up... 00:04:08.971 00:04:08.971 real 0m0.204s 00:04:08.971 user 0m0.068s 00:04:08.971 sys 0m0.036s 00:04:08.971 10:47:01 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.971 10:47:01 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:08.971 ************************************ 00:04:08.971 END TEST env_dpdk_post_init 00:04:08.971 ************************************ 00:04:08.971 10:47:01 env -- env/env.sh@26 -- # uname 00:04:08.971 10:47:01 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:08.971 10:47:01 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:08.971 10:47:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.971 10:47:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.971 10:47:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.971 ************************************ 00:04:08.971 START TEST env_mem_callbacks 00:04:08.971 ************************************ 00:04:08.971 10:47:02 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:08.971 EAL: Detected CPU lcores: 10 00:04:08.971 EAL: Detected NUMA nodes: 1 00:04:08.971 EAL: Detected shared linkage of DPDK 00:04:08.971 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:08.971 EAL: Selected IOVA mode 'PA' 00:04:09.230 00:04:09.230 00:04:09.230 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.230 http://cunit.sourceforge.net/ 00:04:09.230 00:04:09.230 00:04:09.230 Suite: memory 00:04:09.230 Test: test ... 
00:04:09.230 register 0x200000200000 2097152 00:04:09.230 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:09.230 malloc 3145728 00:04:09.230 register 0x200000400000 4194304 00:04:09.230 buf 0x200000500000 len 3145728 PASSED 00:04:09.230 malloc 64 00:04:09.230 buf 0x2000004fff40 len 64 PASSED 00:04:09.230 malloc 4194304 00:04:09.230 register 0x200000800000 6291456 00:04:09.230 buf 0x200000a00000 len 4194304 PASSED 00:04:09.230 free 0x200000500000 3145728 00:04:09.230 free 0x2000004fff40 64 00:04:09.230 unregister 0x200000400000 4194304 PASSED 00:04:09.230 free 0x200000a00000 4194304 00:04:09.230 unregister 0x200000800000 6291456 PASSED 00:04:09.230 malloc 8388608 00:04:09.230 register 0x200000400000 10485760 00:04:09.230 buf 0x200000600000 len 8388608 PASSED 00:04:09.230 free 0x200000600000 8388608 00:04:09.230 unregister 0x200000400000 10485760 PASSED 00:04:09.230 passed 00:04:09.230 00:04:09.230 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.230 suites 1 1 n/a 0 0 00:04:09.230 tests 1 1 1 0 0 00:04:09.230 asserts 15 15 15 0 n/a 00:04:09.230 00:04:09.230 Elapsed time = 0.012 seconds 00:04:09.230 00:04:09.230 real 0m0.154s 00:04:09.230 user 0m0.025s 00:04:09.230 sys 0m0.027s 00:04:09.230 10:47:02 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.230 10:47:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:09.230 ************************************ 00:04:09.230 END TEST env_mem_callbacks 00:04:09.230 ************************************ 00:04:09.230 00:04:09.230 real 0m2.687s 00:04:09.230 user 0m1.473s 00:04:09.230 sys 0m0.897s 00:04:09.230 10:47:02 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.230 10:47:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.230 ************************************ 00:04:09.230 END TEST env 00:04:09.230 ************************************ 00:04:09.230 10:47:02 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:09.231 10:47:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.231 10:47:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.231 10:47:02 -- common/autotest_common.sh@10 -- # set +x 00:04:09.231 ************************************ 00:04:09.231 START TEST rpc 00:04:09.231 ************************************ 00:04:09.231 10:47:02 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:09.231 * Looking for test storage... 
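Every sub-test in this log is framed by the same run_test helper: a START banner, the timed command, the real/user/sys summary, and an END banner, which is the pattern visible around env_memory, env_vtophys, env_pci and the env suite as a whole. A stripped-down sketch of such a wrapper (illustrative; the real helper in autotest_common.sh also manages xtrace and exit-code bookkeeping):

    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }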
00:04:09.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:09.231 10:47:02 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:09.231 10:47:02 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:09.231 10:47:02 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:09.488 10:47:02 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:09.488 10:47:02 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:09.488 10:47:02 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:09.488 10:47:02 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:09.488 10:47:02 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.488 10:47:02 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:09.488 10:47:02 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:09.488 10:47:02 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:09.488 10:47:02 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:09.488 10:47:02 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:09.488 10:47:02 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:09.488 10:47:02 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:09.488 10:47:02 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:09.488 10:47:02 rpc -- scripts/common.sh@345 -- # : 1 00:04:09.488 10:47:02 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:09.488 10:47:02 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:09.488 10:47:02 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:09.488 10:47:02 rpc -- scripts/common.sh@353 -- # local d=1 00:04:09.488 10:47:02 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.488 10:47:02 rpc -- scripts/common.sh@355 -- # echo 1 00:04:09.488 10:47:02 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:09.488 10:47:02 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:09.488 10:47:02 rpc -- scripts/common.sh@353 -- # local d=2 00:04:09.488 10:47:02 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.488 10:47:02 rpc -- scripts/common.sh@355 -- # echo 2 00:04:09.488 10:47:02 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:09.488 10:47:02 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:09.488 10:47:02 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:09.488 10:47:02 rpc -- scripts/common.sh@368 -- # return 0 00:04:09.488 10:47:02 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.488 10:47:02 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:09.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.488 --rc genhtml_branch_coverage=1 00:04:09.488 --rc genhtml_function_coverage=1 00:04:09.488 --rc genhtml_legend=1 00:04:09.488 --rc geninfo_all_blocks=1 00:04:09.488 --rc geninfo_unexecuted_blocks=1 00:04:09.488 00:04:09.488 ' 00:04:09.488 10:47:02 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:09.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.488 --rc genhtml_branch_coverage=1 00:04:09.488 --rc genhtml_function_coverage=1 00:04:09.488 --rc genhtml_legend=1 00:04:09.488 --rc geninfo_all_blocks=1 00:04:09.488 --rc geninfo_unexecuted_blocks=1 00:04:09.488 00:04:09.488 ' 00:04:09.488 10:47:02 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:09.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.488 --rc genhtml_branch_coverage=1 00:04:09.488 --rc genhtml_function_coverage=1 00:04:09.488 --rc 
genhtml_legend=1 00:04:09.488 --rc geninfo_all_blocks=1 00:04:09.488 --rc geninfo_unexecuted_blocks=1 00:04:09.488 00:04:09.488 ' 00:04:09.488 10:47:02 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:09.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.488 --rc genhtml_branch_coverage=1 00:04:09.488 --rc genhtml_function_coverage=1 00:04:09.488 --rc genhtml_legend=1 00:04:09.488 --rc geninfo_all_blocks=1 00:04:09.488 --rc geninfo_unexecuted_blocks=1 00:04:09.488 00:04:09.488 ' 00:04:09.488 10:47:02 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:09.488 10:47:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56951 00:04:09.488 10:47:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.488 10:47:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56951 00:04:09.488 10:47:02 rpc -- common/autotest_common.sh@835 -- # '[' -z 56951 ']' 00:04:09.488 10:47:02 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.488 10:47:02 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.488 10:47:02 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.488 10:47:02 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.488 10:47:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.488 [2024-12-09 10:47:02.557750] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:04:09.488 [2024-12-09 10:47:02.557846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56951 ] 00:04:09.747 [2024-12-09 10:47:02.698352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.747 [2024-12-09 10:47:02.755110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:09.747 [2024-12-09 10:47:02.755165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56951' to capture a snapshot of events at runtime. 00:04:09.747 [2024-12-09 10:47:02.755171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:09.747 [2024-12-09 10:47:02.755176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:09.747 [2024-12-09 10:47:02.755182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56951 for offline analysis/debug. 
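The trace notices above come from starting spdk_tgt with '-e bdev' (rpc.sh@64). A sketch of capturing that tracepoint data while this target (pid 56951) is alive, using the command the log itself suggests; the spdk_trace path is an assumption based on the default build layout:

  ./build/bin/spdk_trace -s spdk_tgt -p 56951    # live snapshot of the bdev tracepoint group
  cp /dev/shm/spdk_tgt_trace.pid56951 /tmp/      # or keep the shm file for offline analysis/debug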
00:04:09.747 [2024-12-09 10:47:02.755560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.747 [2024-12-09 10:47:02.816875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:10.685 10:47:03 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.685 10:47:03 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:10.685 10:47:03 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:10.685 10:47:03 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:10.685 10:47:03 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:10.685 10:47:03 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:10.685 10:47:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.685 10:47:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.686 10:47:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.686 ************************************ 00:04:10.686 START TEST rpc_integrity 00:04:10.686 ************************************ 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:10.686 { 00:04:10.686 "name": "Malloc0", 00:04:10.686 "aliases": [ 00:04:10.686 "822634d8-76ff-4b79-9f0e-d38166eb5df5" 00:04:10.686 ], 00:04:10.686 "product_name": "Malloc disk", 00:04:10.686 "block_size": 512, 00:04:10.686 "num_blocks": 16384, 00:04:10.686 "uuid": "822634d8-76ff-4b79-9f0e-d38166eb5df5", 00:04:10.686 "assigned_rate_limits": { 00:04:10.686 "rw_ios_per_sec": 0, 00:04:10.686 "rw_mbytes_per_sec": 0, 00:04:10.686 "r_mbytes_per_sec": 0, 00:04:10.686 "w_mbytes_per_sec": 0 00:04:10.686 }, 00:04:10.686 "claimed": false, 00:04:10.686 "zoned": false, 00:04:10.686 
"supported_io_types": { 00:04:10.686 "read": true, 00:04:10.686 "write": true, 00:04:10.686 "unmap": true, 00:04:10.686 "flush": true, 00:04:10.686 "reset": true, 00:04:10.686 "nvme_admin": false, 00:04:10.686 "nvme_io": false, 00:04:10.686 "nvme_io_md": false, 00:04:10.686 "write_zeroes": true, 00:04:10.686 "zcopy": true, 00:04:10.686 "get_zone_info": false, 00:04:10.686 "zone_management": false, 00:04:10.686 "zone_append": false, 00:04:10.686 "compare": false, 00:04:10.686 "compare_and_write": false, 00:04:10.686 "abort": true, 00:04:10.686 "seek_hole": false, 00:04:10.686 "seek_data": false, 00:04:10.686 "copy": true, 00:04:10.686 "nvme_iov_md": false 00:04:10.686 }, 00:04:10.686 "memory_domains": [ 00:04:10.686 { 00:04:10.686 "dma_device_id": "system", 00:04:10.686 "dma_device_type": 1 00:04:10.686 }, 00:04:10.686 { 00:04:10.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.686 "dma_device_type": 2 00:04:10.686 } 00:04:10.686 ], 00:04:10.686 "driver_specific": {} 00:04:10.686 } 00:04:10.686 ]' 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.686 [2024-12-09 10:47:03.704262] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:10.686 [2024-12-09 10:47:03.704303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:10.686 [2024-12-09 10:47:03.704320] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1468cb0 00:04:10.686 [2024-12-09 10:47:03.704327] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:10.686 [2024-12-09 10:47:03.705670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:10.686 [2024-12-09 10:47:03.705697] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:10.686 Passthru0 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:10.686 { 00:04:10.686 "name": "Malloc0", 00:04:10.686 "aliases": [ 00:04:10.686 "822634d8-76ff-4b79-9f0e-d38166eb5df5" 00:04:10.686 ], 00:04:10.686 "product_name": "Malloc disk", 00:04:10.686 "block_size": 512, 00:04:10.686 "num_blocks": 16384, 00:04:10.686 "uuid": "822634d8-76ff-4b79-9f0e-d38166eb5df5", 00:04:10.686 "assigned_rate_limits": { 00:04:10.686 "rw_ios_per_sec": 0, 00:04:10.686 "rw_mbytes_per_sec": 0, 00:04:10.686 "r_mbytes_per_sec": 0, 00:04:10.686 "w_mbytes_per_sec": 0 00:04:10.686 }, 00:04:10.686 "claimed": true, 00:04:10.686 "claim_type": "exclusive_write", 00:04:10.686 "zoned": false, 00:04:10.686 "supported_io_types": { 00:04:10.686 "read": true, 00:04:10.686 "write": true, 00:04:10.686 "unmap": true, 00:04:10.686 "flush": true, 00:04:10.686 "reset": true, 00:04:10.686 "nvme_admin": false, 
00:04:10.686 "nvme_io": false, 00:04:10.686 "nvme_io_md": false, 00:04:10.686 "write_zeroes": true, 00:04:10.686 "zcopy": true, 00:04:10.686 "get_zone_info": false, 00:04:10.686 "zone_management": false, 00:04:10.686 "zone_append": false, 00:04:10.686 "compare": false, 00:04:10.686 "compare_and_write": false, 00:04:10.686 "abort": true, 00:04:10.686 "seek_hole": false, 00:04:10.686 "seek_data": false, 00:04:10.686 "copy": true, 00:04:10.686 "nvme_iov_md": false 00:04:10.686 }, 00:04:10.686 "memory_domains": [ 00:04:10.686 { 00:04:10.686 "dma_device_id": "system", 00:04:10.686 "dma_device_type": 1 00:04:10.686 }, 00:04:10.686 { 00:04:10.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.686 "dma_device_type": 2 00:04:10.686 } 00:04:10.686 ], 00:04:10.686 "driver_specific": {} 00:04:10.686 }, 00:04:10.686 { 00:04:10.686 "name": "Passthru0", 00:04:10.686 "aliases": [ 00:04:10.686 "ffbc18f2-99be-546c-9530-e2d1995eed80" 00:04:10.686 ], 00:04:10.686 "product_name": "passthru", 00:04:10.686 "block_size": 512, 00:04:10.686 "num_blocks": 16384, 00:04:10.686 "uuid": "ffbc18f2-99be-546c-9530-e2d1995eed80", 00:04:10.686 "assigned_rate_limits": { 00:04:10.686 "rw_ios_per_sec": 0, 00:04:10.686 "rw_mbytes_per_sec": 0, 00:04:10.686 "r_mbytes_per_sec": 0, 00:04:10.686 "w_mbytes_per_sec": 0 00:04:10.686 }, 00:04:10.686 "claimed": false, 00:04:10.686 "zoned": false, 00:04:10.686 "supported_io_types": { 00:04:10.686 "read": true, 00:04:10.686 "write": true, 00:04:10.686 "unmap": true, 00:04:10.686 "flush": true, 00:04:10.686 "reset": true, 00:04:10.686 "nvme_admin": false, 00:04:10.686 "nvme_io": false, 00:04:10.686 "nvme_io_md": false, 00:04:10.686 "write_zeroes": true, 00:04:10.686 "zcopy": true, 00:04:10.686 "get_zone_info": false, 00:04:10.686 "zone_management": false, 00:04:10.686 "zone_append": false, 00:04:10.686 "compare": false, 00:04:10.686 "compare_and_write": false, 00:04:10.686 "abort": true, 00:04:10.686 "seek_hole": false, 00:04:10.686 "seek_data": false, 00:04:10.686 "copy": true, 00:04:10.686 "nvme_iov_md": false 00:04:10.686 }, 00:04:10.686 "memory_domains": [ 00:04:10.686 { 00:04:10.686 "dma_device_id": "system", 00:04:10.686 "dma_device_type": 1 00:04:10.686 }, 00:04:10.686 { 00:04:10.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.686 "dma_device_type": 2 00:04:10.686 } 00:04:10.686 ], 00:04:10.686 "driver_specific": { 00:04:10.686 "passthru": { 00:04:10.686 "name": "Passthru0", 00:04:10.686 "base_bdev_name": "Malloc0" 00:04:10.686 } 00:04:10.686 } 00:04:10.686 } 00:04:10.686 ]' 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:10.686 10:47:03 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.686 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:10.686 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:10.945 10:47:03 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:10.945 00:04:10.945 real 0m0.321s 00:04:10.945 user 0m0.192s 00:04:10.945 sys 0m0.064s 00:04:10.945 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.945 10:47:03 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.945 ************************************ 00:04:10.945 END TEST rpc_integrity 00:04:10.945 ************************************ 00:04:10.945 10:47:03 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:10.945 10:47:03 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:10.945 10:47:03 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:10.945 10:47:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.945 ************************************ 00:04:10.945 START TEST rpc_plugins 00:04:10.945 ************************************ 00:04:10.945 10:47:03 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:10.945 10:47:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:10.945 10:47:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.945 10:47:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:10.945 10:47:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.945 10:47:03 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:10.945 10:47:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:10.945 10:47:03 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.945 10:47:03 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:10.945 10:47:03 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.945 10:47:03 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:10.945 { 00:04:10.945 "name": "Malloc1", 00:04:10.945 "aliases": [ 00:04:10.945 "875efef6-c60e-42b5-bc33-2895a0df66fe" 00:04:10.945 ], 00:04:10.945 "product_name": "Malloc disk", 00:04:10.945 "block_size": 4096, 00:04:10.945 "num_blocks": 256, 00:04:10.945 "uuid": "875efef6-c60e-42b5-bc33-2895a0df66fe", 00:04:10.945 "assigned_rate_limits": { 00:04:10.945 "rw_ios_per_sec": 0, 00:04:10.945 "rw_mbytes_per_sec": 0, 00:04:10.945 "r_mbytes_per_sec": 0, 00:04:10.945 "w_mbytes_per_sec": 0 00:04:10.945 }, 00:04:10.946 "claimed": false, 00:04:10.946 "zoned": false, 00:04:10.946 "supported_io_types": { 00:04:10.946 "read": true, 00:04:10.946 "write": true, 00:04:10.946 "unmap": true, 00:04:10.946 "flush": true, 00:04:10.946 "reset": true, 00:04:10.946 "nvme_admin": false, 00:04:10.946 "nvme_io": false, 00:04:10.946 "nvme_io_md": false, 00:04:10.946 "write_zeroes": true, 00:04:10.946 "zcopy": true, 00:04:10.946 "get_zone_info": false, 00:04:10.946 "zone_management": false, 00:04:10.946 "zone_append": false, 00:04:10.946 "compare": false, 00:04:10.946 "compare_and_write": false, 00:04:10.946 "abort": true, 00:04:10.946 "seek_hole": false, 00:04:10.946 "seek_data": false, 00:04:10.946 "copy": true, 00:04:10.946 "nvme_iov_md": false 00:04:10.946 }, 00:04:10.946 "memory_domains": [ 00:04:10.946 { 
00:04:10.946 "dma_device_id": "system", 00:04:10.946 "dma_device_type": 1 00:04:10.946 }, 00:04:10.946 { 00:04:10.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.946 "dma_device_type": 2 00:04:10.946 } 00:04:10.946 ], 00:04:10.946 "driver_specific": {} 00:04:10.946 } 00:04:10.946 ]' 00:04:10.946 10:47:03 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:10.946 10:47:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:10.946 10:47:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:10.946 10:47:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.946 10:47:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:10.946 10:47:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.946 10:47:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:10.946 10:47:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:10.946 10:47:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:10.946 10:47:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:10.946 10:47:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:10.946 10:47:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:10.946 10:47:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:10.946 00:04:10.946 real 0m0.162s 00:04:10.946 user 0m0.098s 00:04:10.946 sys 0m0.024s 00:04:10.946 10:47:04 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:10.946 10:47:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:10.946 ************************************ 00:04:10.946 END TEST rpc_plugins 00:04:10.946 ************************************ 00:04:11.204 10:47:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:11.204 10:47:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.204 10:47:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.204 10:47:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.204 ************************************ 00:04:11.204 START TEST rpc_trace_cmd_test 00:04:11.204 ************************************ 00:04:11.204 10:47:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:11.204 10:47:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:11.204 10:47:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:11.204 10:47:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.204 10:47:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:11.204 10:47:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.204 10:47:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:11.204 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56951", 00:04:11.204 "tpoint_group_mask": "0x8", 00:04:11.204 "iscsi_conn": { 00:04:11.204 "mask": "0x2", 00:04:11.204 "tpoint_mask": "0x0" 00:04:11.204 }, 00:04:11.204 "scsi": { 00:04:11.204 "mask": "0x4", 00:04:11.204 "tpoint_mask": "0x0" 00:04:11.204 }, 00:04:11.204 "bdev": { 00:04:11.204 "mask": "0x8", 00:04:11.204 "tpoint_mask": "0xffffffffffffffff" 00:04:11.204 }, 00:04:11.204 "nvmf_rdma": { 00:04:11.204 "mask": "0x10", 00:04:11.204 "tpoint_mask": "0x0" 00:04:11.204 }, 00:04:11.204 "nvmf_tcp": { 00:04:11.204 "mask": "0x20", 00:04:11.204 "tpoint_mask": "0x0" 00:04:11.204 }, 00:04:11.204 "ftl": { 00:04:11.204 
"mask": "0x40", 00:04:11.204 "tpoint_mask": "0x0" 00:04:11.204 }, 00:04:11.204 "blobfs": { 00:04:11.204 "mask": "0x80", 00:04:11.204 "tpoint_mask": "0x0" 00:04:11.204 }, 00:04:11.204 "dsa": { 00:04:11.204 "mask": "0x200", 00:04:11.204 "tpoint_mask": "0x0" 00:04:11.204 }, 00:04:11.204 "thread": { 00:04:11.204 "mask": "0x400", 00:04:11.204 "tpoint_mask": "0x0" 00:04:11.204 }, 00:04:11.204 "nvme_pcie": { 00:04:11.204 "mask": "0x800", 00:04:11.204 "tpoint_mask": "0x0" 00:04:11.204 }, 00:04:11.204 "iaa": { 00:04:11.204 "mask": "0x1000", 00:04:11.204 "tpoint_mask": "0x0" 00:04:11.204 }, 00:04:11.204 "nvme_tcp": { 00:04:11.204 "mask": "0x2000", 00:04:11.204 "tpoint_mask": "0x0" 00:04:11.204 }, 00:04:11.204 "bdev_nvme": { 00:04:11.204 "mask": "0x4000", 00:04:11.204 "tpoint_mask": "0x0" 00:04:11.204 }, 00:04:11.204 "sock": { 00:04:11.204 "mask": "0x8000", 00:04:11.204 "tpoint_mask": "0x0" 00:04:11.204 }, 00:04:11.204 "blob": { 00:04:11.204 "mask": "0x10000", 00:04:11.204 "tpoint_mask": "0x0" 00:04:11.204 }, 00:04:11.204 "bdev_raid": { 00:04:11.204 "mask": "0x20000", 00:04:11.204 "tpoint_mask": "0x0" 00:04:11.204 }, 00:04:11.204 "scheduler": { 00:04:11.204 "mask": "0x40000", 00:04:11.204 "tpoint_mask": "0x0" 00:04:11.204 } 00:04:11.204 }' 00:04:11.204 10:47:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:11.204 10:47:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:11.204 10:47:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:11.204 10:47:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:11.204 10:47:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:11.204 10:47:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:11.204 10:47:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:11.461 10:47:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:11.461 10:47:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:11.461 10:47:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:11.461 00:04:11.461 real 0m0.273s 00:04:11.461 user 0m0.227s 00:04:11.461 sys 0m0.034s 00:04:11.461 10:47:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.462 10:47:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:11.462 ************************************ 00:04:11.462 END TEST rpc_trace_cmd_test 00:04:11.462 ************************************ 00:04:11.462 10:47:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:11.462 10:47:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:11.462 10:47:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:11.462 10:47:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:11.462 10:47:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:11.462 10:47:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.462 ************************************ 00:04:11.462 START TEST rpc_daemon_integrity 00:04:11.462 ************************************ 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.462 
10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:11.462 { 00:04:11.462 "name": "Malloc2", 00:04:11.462 "aliases": [ 00:04:11.462 "74b69576-cdba-411e-8fde-80e68e2b0a4b" 00:04:11.462 ], 00:04:11.462 "product_name": "Malloc disk", 00:04:11.462 "block_size": 512, 00:04:11.462 "num_blocks": 16384, 00:04:11.462 "uuid": "74b69576-cdba-411e-8fde-80e68e2b0a4b", 00:04:11.462 "assigned_rate_limits": { 00:04:11.462 "rw_ios_per_sec": 0, 00:04:11.462 "rw_mbytes_per_sec": 0, 00:04:11.462 "r_mbytes_per_sec": 0, 00:04:11.462 "w_mbytes_per_sec": 0 00:04:11.462 }, 00:04:11.462 "claimed": false, 00:04:11.462 "zoned": false, 00:04:11.462 "supported_io_types": { 00:04:11.462 "read": true, 00:04:11.462 "write": true, 00:04:11.462 "unmap": true, 00:04:11.462 "flush": true, 00:04:11.462 "reset": true, 00:04:11.462 "nvme_admin": false, 00:04:11.462 "nvme_io": false, 00:04:11.462 "nvme_io_md": false, 00:04:11.462 "write_zeroes": true, 00:04:11.462 "zcopy": true, 00:04:11.462 "get_zone_info": false, 00:04:11.462 "zone_management": false, 00:04:11.462 "zone_append": false, 00:04:11.462 "compare": false, 00:04:11.462 "compare_and_write": false, 00:04:11.462 "abort": true, 00:04:11.462 "seek_hole": false, 00:04:11.462 "seek_data": false, 00:04:11.462 "copy": true, 00:04:11.462 "nvme_iov_md": false 00:04:11.462 }, 00:04:11.462 "memory_domains": [ 00:04:11.462 { 00:04:11.462 "dma_device_id": "system", 00:04:11.462 "dma_device_type": 1 00:04:11.462 }, 00:04:11.462 { 00:04:11.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.462 "dma_device_type": 2 00:04:11.462 } 00:04:11.462 ], 00:04:11.462 "driver_specific": {} 00:04:11.462 } 00:04:11.462 ]' 00:04:11.462 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.721 [2024-12-09 10:47:04.666724] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:11.721 [2024-12-09 10:47:04.666771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:11.721 [2024-12-09 10:47:04.666784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14cc270 00:04:11.721 [2024-12-09 10:47:04.666790] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:11.721 [2024-12-09 10:47:04.668243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:11.721 [2024-12-09 10:47:04.668276] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:11.721 Passthru0 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:11.721 { 00:04:11.721 "name": "Malloc2", 00:04:11.721 "aliases": [ 00:04:11.721 "74b69576-cdba-411e-8fde-80e68e2b0a4b" 00:04:11.721 ], 00:04:11.721 "product_name": "Malloc disk", 00:04:11.721 "block_size": 512, 00:04:11.721 "num_blocks": 16384, 00:04:11.721 "uuid": "74b69576-cdba-411e-8fde-80e68e2b0a4b", 00:04:11.721 "assigned_rate_limits": { 00:04:11.721 "rw_ios_per_sec": 0, 00:04:11.721 "rw_mbytes_per_sec": 0, 00:04:11.721 "r_mbytes_per_sec": 0, 00:04:11.721 "w_mbytes_per_sec": 0 00:04:11.721 }, 00:04:11.721 "claimed": true, 00:04:11.721 "claim_type": "exclusive_write", 00:04:11.721 "zoned": false, 00:04:11.721 "supported_io_types": { 00:04:11.721 "read": true, 00:04:11.721 "write": true, 00:04:11.721 "unmap": true, 00:04:11.721 "flush": true, 00:04:11.721 "reset": true, 00:04:11.721 "nvme_admin": false, 00:04:11.721 "nvme_io": false, 00:04:11.721 "nvme_io_md": false, 00:04:11.721 "write_zeroes": true, 00:04:11.721 "zcopy": true, 00:04:11.721 "get_zone_info": false, 00:04:11.721 "zone_management": false, 00:04:11.721 "zone_append": false, 00:04:11.721 "compare": false, 00:04:11.721 "compare_and_write": false, 00:04:11.721 "abort": true, 00:04:11.721 "seek_hole": false, 00:04:11.721 "seek_data": false, 00:04:11.721 "copy": true, 00:04:11.721 "nvme_iov_md": false 00:04:11.721 }, 00:04:11.721 "memory_domains": [ 00:04:11.721 { 00:04:11.721 "dma_device_id": "system", 00:04:11.721 "dma_device_type": 1 00:04:11.721 }, 00:04:11.721 { 00:04:11.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.721 "dma_device_type": 2 00:04:11.721 } 00:04:11.721 ], 00:04:11.721 "driver_specific": {} 00:04:11.721 }, 00:04:11.721 { 00:04:11.721 "name": "Passthru0", 00:04:11.721 "aliases": [ 00:04:11.721 "5fb2eca9-a2f0-556d-bdfd-5ef2d2fc8ee9" 00:04:11.721 ], 00:04:11.721 "product_name": "passthru", 00:04:11.721 "block_size": 512, 00:04:11.721 "num_blocks": 16384, 00:04:11.721 "uuid": "5fb2eca9-a2f0-556d-bdfd-5ef2d2fc8ee9", 00:04:11.721 "assigned_rate_limits": { 00:04:11.721 "rw_ios_per_sec": 0, 00:04:11.721 "rw_mbytes_per_sec": 0, 00:04:11.721 "r_mbytes_per_sec": 0, 00:04:11.721 "w_mbytes_per_sec": 0 00:04:11.721 }, 00:04:11.721 "claimed": false, 00:04:11.721 "zoned": false, 00:04:11.721 "supported_io_types": { 00:04:11.721 "read": true, 00:04:11.721 "write": true, 00:04:11.721 "unmap": true, 00:04:11.721 "flush": true, 00:04:11.721 "reset": true, 00:04:11.721 "nvme_admin": false, 00:04:11.721 "nvme_io": false, 00:04:11.721 
"nvme_io_md": false, 00:04:11.721 "write_zeroes": true, 00:04:11.721 "zcopy": true, 00:04:11.721 "get_zone_info": false, 00:04:11.721 "zone_management": false, 00:04:11.721 "zone_append": false, 00:04:11.721 "compare": false, 00:04:11.721 "compare_and_write": false, 00:04:11.721 "abort": true, 00:04:11.721 "seek_hole": false, 00:04:11.721 "seek_data": false, 00:04:11.721 "copy": true, 00:04:11.721 "nvme_iov_md": false 00:04:11.721 }, 00:04:11.721 "memory_domains": [ 00:04:11.721 { 00:04:11.721 "dma_device_id": "system", 00:04:11.721 "dma_device_type": 1 00:04:11.721 }, 00:04:11.721 { 00:04:11.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.721 "dma_device_type": 2 00:04:11.721 } 00:04:11.721 ], 00:04:11.721 "driver_specific": { 00:04:11.721 "passthru": { 00:04:11.721 "name": "Passthru0", 00:04:11.721 "base_bdev_name": "Malloc2" 00:04:11.721 } 00:04:11.721 } 00:04:11.721 } 00:04:11.721 ]' 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:11.721 00:04:11.721 real 0m0.328s 00:04:11.721 user 0m0.199s 00:04:11.721 sys 0m0.060s 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:11.721 10:47:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.721 ************************************ 00:04:11.721 END TEST rpc_daemon_integrity 00:04:11.721 ************************************ 00:04:11.721 10:47:04 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:11.721 10:47:04 rpc -- rpc/rpc.sh@84 -- # killprocess 56951 00:04:11.721 10:47:04 rpc -- common/autotest_common.sh@954 -- # '[' -z 56951 ']' 00:04:11.721 10:47:04 rpc -- common/autotest_common.sh@958 -- # kill -0 56951 00:04:11.721 10:47:04 rpc -- common/autotest_common.sh@959 -- # uname 00:04:11.721 10:47:04 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:11.721 10:47:04 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56951 00:04:11.980 10:47:04 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:04:11.980 10:47:04 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:11.980 killing process with pid 56951 00:04:11.980 10:47:04 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56951' 00:04:11.980 10:47:04 rpc -- common/autotest_common.sh@973 -- # kill 56951 00:04:11.980 10:47:04 rpc -- common/autotest_common.sh@978 -- # wait 56951 00:04:12.238 00:04:12.238 real 0m2.980s 00:04:12.238 user 0m3.776s 00:04:12.238 sys 0m0.798s 00:04:12.238 10:47:05 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:12.238 10:47:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.238 ************************************ 00:04:12.238 END TEST rpc 00:04:12.238 ************************************ 00:04:12.238 10:47:05 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:12.238 10:47:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.238 10:47:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.238 10:47:05 -- common/autotest_common.sh@10 -- # set +x 00:04:12.238 ************************************ 00:04:12.238 START TEST skip_rpc 00:04:12.238 ************************************ 00:04:12.238 10:47:05 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:12.496 * Looking for test storage... 00:04:12.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:12.497 10:47:05 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:12.497 10:47:05 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:12.497 10:47:05 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:12.497 10:47:05 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.497 10:47:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:12.497 10:47:05 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.497 10:47:05 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:12.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.497 --rc genhtml_branch_coverage=1 00:04:12.497 --rc genhtml_function_coverage=1 00:04:12.497 --rc genhtml_legend=1 00:04:12.497 --rc geninfo_all_blocks=1 00:04:12.497 --rc geninfo_unexecuted_blocks=1 00:04:12.497 00:04:12.497 ' 00:04:12.497 10:47:05 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:12.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.497 --rc genhtml_branch_coverage=1 00:04:12.497 --rc genhtml_function_coverage=1 00:04:12.497 --rc genhtml_legend=1 00:04:12.497 --rc geninfo_all_blocks=1 00:04:12.497 --rc geninfo_unexecuted_blocks=1 00:04:12.497 00:04:12.497 ' 00:04:12.497 10:47:05 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:12.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.497 --rc genhtml_branch_coverage=1 00:04:12.497 --rc genhtml_function_coverage=1 00:04:12.497 --rc genhtml_legend=1 00:04:12.497 --rc geninfo_all_blocks=1 00:04:12.497 --rc geninfo_unexecuted_blocks=1 00:04:12.497 00:04:12.497 ' 00:04:12.497 10:47:05 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:12.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.497 --rc genhtml_branch_coverage=1 00:04:12.497 --rc genhtml_function_coverage=1 00:04:12.497 --rc genhtml_legend=1 00:04:12.497 --rc geninfo_all_blocks=1 00:04:12.497 --rc geninfo_unexecuted_blocks=1 00:04:12.497 00:04:12.497 ' 00:04:12.497 10:47:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:12.497 10:47:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:12.497 10:47:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:12.497 10:47:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:12.497 10:47:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:12.497 10:47:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.497 ************************************ 00:04:12.497 START TEST skip_rpc 00:04:12.497 ************************************ 00:04:12.497 10:47:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:12.497 10:47:05 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57157 00:04:12.497 10:47:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:12.497 10:47:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.497 10:47:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:12.497 [2024-12-09 10:47:05.638114] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:04:12.497 [2024-12-09 10:47:05.638184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57157 ] 00:04:12.755 [2024-12-09 10:47:05.791965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.755 [2024-12-09 10:47:05.843895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.755 [2024-12-09 10:47:05.900563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57157 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57157 ']' 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57157 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57157 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.136 killing process with pid 57157 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57157' 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57157 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57157 00:04:18.136 00:04:18.136 real 0m5.422s 00:04:18.136 user 0m5.108s 00:04:18.136 sys 0m0.241s 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.136 10:47:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.136 ************************************ 00:04:18.136 END TEST skip_rpc 00:04:18.136 ************************************ 00:04:18.136 10:47:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:18.136 10:47:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.136 10:47:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.136 10:47:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.136 ************************************ 00:04:18.136 START TEST skip_rpc_with_json 00:04:18.136 ************************************ 00:04:18.136 10:47:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:18.136 10:47:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:18.136 10:47:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57238 00:04:18.136 10:47:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:18.136 10:47:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.136 10:47:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57238 00:04:18.136 10:47:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57238 ']' 00:04:18.136 10:47:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.136 10:47:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.136 10:47:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.136 10:47:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.136 10:47:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.136 [2024-12-09 10:47:11.122739] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
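The skip_rpc_with_json case that follows creates a TCP transport on this freshly started target, saves the live configuration to the CONFIG_PATH defined at skip_rpc.sh@11, and later boots a second target straight from that file. A condensed sketch of the same flow, reusing the paths shown in the log:

  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json
  ./build/bin/spdk_tgt -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json   # restart purely from the saved JSON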
00:04:18.136 [2024-12-09 10:47:11.122830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57238 ] 00:04:18.136 [2024-12-09 10:47:11.278026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.395 [2024-12-09 10:47:11.339178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.395 [2024-12-09 10:47:11.400818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:18.963 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.963 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:18.963 10:47:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:18.963 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.963 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.963 [2024-12-09 10:47:12.074630] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:18.963 request: 00:04:18.963 { 00:04:18.963 "trtype": "tcp", 00:04:18.963 "method": "nvmf_get_transports", 00:04:18.963 "req_id": 1 00:04:18.963 } 00:04:18.963 Got JSON-RPC error response 00:04:18.963 response: 00:04:18.963 { 00:04:18.963 "code": -19, 00:04:18.963 "message": "No such device" 00:04:18.963 } 00:04:18.963 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:18.963 10:47:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:18.963 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.963 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.963 [2024-12-09 10:47:12.086723] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:18.963 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.963 10:47:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:18.963 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.963 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.223 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.223 10:47:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.223 { 00:04:19.223 "subsystems": [ 00:04:19.223 { 00:04:19.223 "subsystem": "fsdev", 00:04:19.223 "config": [ 00:04:19.223 { 00:04:19.223 "method": "fsdev_set_opts", 00:04:19.223 "params": { 00:04:19.223 "fsdev_io_pool_size": 65535, 00:04:19.223 "fsdev_io_cache_size": 256 00:04:19.223 } 00:04:19.223 } 00:04:19.223 ] 00:04:19.223 }, 00:04:19.223 { 00:04:19.223 "subsystem": "keyring", 00:04:19.223 "config": [] 00:04:19.223 }, 00:04:19.223 { 00:04:19.223 "subsystem": "iobuf", 00:04:19.223 "config": [ 00:04:19.223 { 00:04:19.223 "method": "iobuf_set_options", 00:04:19.223 "params": { 00:04:19.223 "small_pool_count": 8192, 00:04:19.223 "large_pool_count": 1024, 00:04:19.223 "small_bufsize": 8192, 00:04:19.223 "large_bufsize": 135168, 00:04:19.223 "enable_numa": false 00:04:19.223 } 
00:04:19.223 } 00:04:19.223 ] 00:04:19.223 }, 00:04:19.223 { 00:04:19.223 "subsystem": "sock", 00:04:19.223 "config": [ 00:04:19.223 { 00:04:19.223 "method": "sock_set_default_impl", 00:04:19.223 "params": { 00:04:19.223 "impl_name": "uring" 00:04:19.223 } 00:04:19.223 }, 00:04:19.223 { 00:04:19.223 "method": "sock_impl_set_options", 00:04:19.223 "params": { 00:04:19.223 "impl_name": "ssl", 00:04:19.223 "recv_buf_size": 4096, 00:04:19.223 "send_buf_size": 4096, 00:04:19.223 "enable_recv_pipe": true, 00:04:19.223 "enable_quickack": false, 00:04:19.223 "enable_placement_id": 0, 00:04:19.223 "enable_zerocopy_send_server": true, 00:04:19.223 "enable_zerocopy_send_client": false, 00:04:19.223 "zerocopy_threshold": 0, 00:04:19.223 "tls_version": 0, 00:04:19.223 "enable_ktls": false 00:04:19.223 } 00:04:19.223 }, 00:04:19.223 { 00:04:19.223 "method": "sock_impl_set_options", 00:04:19.223 "params": { 00:04:19.223 "impl_name": "posix", 00:04:19.223 "recv_buf_size": 2097152, 00:04:19.223 "send_buf_size": 2097152, 00:04:19.223 "enable_recv_pipe": true, 00:04:19.223 "enable_quickack": false, 00:04:19.223 "enable_placement_id": 0, 00:04:19.223 "enable_zerocopy_send_server": true, 00:04:19.223 "enable_zerocopy_send_client": false, 00:04:19.223 "zerocopy_threshold": 0, 00:04:19.223 "tls_version": 0, 00:04:19.223 "enable_ktls": false 00:04:19.223 } 00:04:19.223 }, 00:04:19.223 { 00:04:19.223 "method": "sock_impl_set_options", 00:04:19.223 "params": { 00:04:19.223 "impl_name": "uring", 00:04:19.223 "recv_buf_size": 2097152, 00:04:19.223 "send_buf_size": 2097152, 00:04:19.223 "enable_recv_pipe": true, 00:04:19.223 "enable_quickack": false, 00:04:19.223 "enable_placement_id": 0, 00:04:19.223 "enable_zerocopy_send_server": false, 00:04:19.223 "enable_zerocopy_send_client": false, 00:04:19.223 "zerocopy_threshold": 0, 00:04:19.223 "tls_version": 0, 00:04:19.223 "enable_ktls": false 00:04:19.223 } 00:04:19.223 } 00:04:19.223 ] 00:04:19.223 }, 00:04:19.223 { 00:04:19.223 "subsystem": "vmd", 00:04:19.223 "config": [] 00:04:19.223 }, 00:04:19.223 { 00:04:19.223 "subsystem": "accel", 00:04:19.223 "config": [ 00:04:19.223 { 00:04:19.223 "method": "accel_set_options", 00:04:19.223 "params": { 00:04:19.223 "small_cache_size": 128, 00:04:19.223 "large_cache_size": 16, 00:04:19.223 "task_count": 2048, 00:04:19.223 "sequence_count": 2048, 00:04:19.223 "buf_count": 2048 00:04:19.223 } 00:04:19.223 } 00:04:19.223 ] 00:04:19.223 }, 00:04:19.223 { 00:04:19.223 "subsystem": "bdev", 00:04:19.223 "config": [ 00:04:19.223 { 00:04:19.223 "method": "bdev_set_options", 00:04:19.223 "params": { 00:04:19.223 "bdev_io_pool_size": 65535, 00:04:19.223 "bdev_io_cache_size": 256, 00:04:19.223 "bdev_auto_examine": true, 00:04:19.223 "iobuf_small_cache_size": 128, 00:04:19.223 "iobuf_large_cache_size": 16 00:04:19.223 } 00:04:19.223 }, 00:04:19.223 { 00:04:19.223 "method": "bdev_raid_set_options", 00:04:19.223 "params": { 00:04:19.223 "process_window_size_kb": 1024, 00:04:19.223 "process_max_bandwidth_mb_sec": 0 00:04:19.223 } 00:04:19.223 }, 00:04:19.223 { 00:04:19.223 "method": "bdev_iscsi_set_options", 00:04:19.223 "params": { 00:04:19.223 "timeout_sec": 30 00:04:19.223 } 00:04:19.223 }, 00:04:19.223 { 00:04:19.224 "method": "bdev_nvme_set_options", 00:04:19.224 "params": { 00:04:19.224 "action_on_timeout": "none", 00:04:19.224 "timeout_us": 0, 00:04:19.224 "timeout_admin_us": 0, 00:04:19.224 "keep_alive_timeout_ms": 10000, 00:04:19.224 "arbitration_burst": 0, 00:04:19.224 "low_priority_weight": 0, 00:04:19.224 "medium_priority_weight": 
0, 00:04:19.224 "high_priority_weight": 0, 00:04:19.224 "nvme_adminq_poll_period_us": 10000, 00:04:19.224 "nvme_ioq_poll_period_us": 0, 00:04:19.224 "io_queue_requests": 0, 00:04:19.224 "delay_cmd_submit": true, 00:04:19.224 "transport_retry_count": 4, 00:04:19.224 "bdev_retry_count": 3, 00:04:19.224 "transport_ack_timeout": 0, 00:04:19.224 "ctrlr_loss_timeout_sec": 0, 00:04:19.224 "reconnect_delay_sec": 0, 00:04:19.224 "fast_io_fail_timeout_sec": 0, 00:04:19.224 "disable_auto_failback": false, 00:04:19.224 "generate_uuids": false, 00:04:19.224 "transport_tos": 0, 00:04:19.224 "nvme_error_stat": false, 00:04:19.224 "rdma_srq_size": 0, 00:04:19.224 "io_path_stat": false, 00:04:19.224 "allow_accel_sequence": false, 00:04:19.224 "rdma_max_cq_size": 0, 00:04:19.224 "rdma_cm_event_timeout_ms": 0, 00:04:19.224 "dhchap_digests": [ 00:04:19.224 "sha256", 00:04:19.224 "sha384", 00:04:19.224 "sha512" 00:04:19.224 ], 00:04:19.224 "dhchap_dhgroups": [ 00:04:19.224 "null", 00:04:19.224 "ffdhe2048", 00:04:19.224 "ffdhe3072", 00:04:19.224 "ffdhe4096", 00:04:19.224 "ffdhe6144", 00:04:19.224 "ffdhe8192" 00:04:19.224 ] 00:04:19.224 } 00:04:19.224 }, 00:04:19.224 { 00:04:19.224 "method": "bdev_nvme_set_hotplug", 00:04:19.224 "params": { 00:04:19.224 "period_us": 100000, 00:04:19.224 "enable": false 00:04:19.224 } 00:04:19.224 }, 00:04:19.224 { 00:04:19.224 "method": "bdev_wait_for_examine" 00:04:19.224 } 00:04:19.224 ] 00:04:19.224 }, 00:04:19.224 { 00:04:19.224 "subsystem": "scsi", 00:04:19.224 "config": null 00:04:19.224 }, 00:04:19.224 { 00:04:19.224 "subsystem": "scheduler", 00:04:19.224 "config": [ 00:04:19.224 { 00:04:19.224 "method": "framework_set_scheduler", 00:04:19.224 "params": { 00:04:19.224 "name": "static" 00:04:19.224 } 00:04:19.224 } 00:04:19.224 ] 00:04:19.224 }, 00:04:19.224 { 00:04:19.224 "subsystem": "vhost_scsi", 00:04:19.224 "config": [] 00:04:19.224 }, 00:04:19.224 { 00:04:19.224 "subsystem": "vhost_blk", 00:04:19.224 "config": [] 00:04:19.224 }, 00:04:19.224 { 00:04:19.224 "subsystem": "ublk", 00:04:19.224 "config": [] 00:04:19.224 }, 00:04:19.224 { 00:04:19.224 "subsystem": "nbd", 00:04:19.224 "config": [] 00:04:19.224 }, 00:04:19.224 { 00:04:19.224 "subsystem": "nvmf", 00:04:19.224 "config": [ 00:04:19.224 { 00:04:19.224 "method": "nvmf_set_config", 00:04:19.224 "params": { 00:04:19.224 "discovery_filter": "match_any", 00:04:19.224 "admin_cmd_passthru": { 00:04:19.224 "identify_ctrlr": false 00:04:19.224 }, 00:04:19.224 "dhchap_digests": [ 00:04:19.224 "sha256", 00:04:19.224 "sha384", 00:04:19.224 "sha512" 00:04:19.224 ], 00:04:19.224 "dhchap_dhgroups": [ 00:04:19.224 "null", 00:04:19.224 "ffdhe2048", 00:04:19.224 "ffdhe3072", 00:04:19.224 "ffdhe4096", 00:04:19.224 "ffdhe6144", 00:04:19.224 "ffdhe8192" 00:04:19.224 ] 00:04:19.224 } 00:04:19.224 }, 00:04:19.224 { 00:04:19.224 "method": "nvmf_set_max_subsystems", 00:04:19.224 "params": { 00:04:19.224 "max_subsystems": 1024 00:04:19.224 } 00:04:19.224 }, 00:04:19.224 { 00:04:19.224 "method": "nvmf_set_crdt", 00:04:19.224 "params": { 00:04:19.224 "crdt1": 0, 00:04:19.224 "crdt2": 0, 00:04:19.224 "crdt3": 0 00:04:19.224 } 00:04:19.224 }, 00:04:19.224 { 00:04:19.224 "method": "nvmf_create_transport", 00:04:19.224 "params": { 00:04:19.224 "trtype": "TCP", 00:04:19.224 "max_queue_depth": 128, 00:04:19.224 "max_io_qpairs_per_ctrlr": 127, 00:04:19.224 "in_capsule_data_size": 4096, 00:04:19.224 "max_io_size": 131072, 00:04:19.224 "io_unit_size": 131072, 00:04:19.224 "max_aq_depth": 128, 00:04:19.224 "num_shared_buffers": 511, 00:04:19.224 
"buf_cache_size": 4294967295, 00:04:19.224 "dif_insert_or_strip": false, 00:04:19.224 "zcopy": false, 00:04:19.224 "c2h_success": true, 00:04:19.224 "sock_priority": 0, 00:04:19.224 "abort_timeout_sec": 1, 00:04:19.224 "ack_timeout": 0, 00:04:19.224 "data_wr_pool_size": 0 00:04:19.224 } 00:04:19.224 } 00:04:19.224 ] 00:04:19.224 }, 00:04:19.224 { 00:04:19.224 "subsystem": "iscsi", 00:04:19.224 "config": [ 00:04:19.224 { 00:04:19.224 "method": "iscsi_set_options", 00:04:19.224 "params": { 00:04:19.224 "node_base": "iqn.2016-06.io.spdk", 00:04:19.224 "max_sessions": 128, 00:04:19.224 "max_connections_per_session": 2, 00:04:19.224 "max_queue_depth": 64, 00:04:19.224 "default_time2wait": 2, 00:04:19.224 "default_time2retain": 20, 00:04:19.224 "first_burst_length": 8192, 00:04:19.224 "immediate_data": true, 00:04:19.224 "allow_duplicated_isid": false, 00:04:19.224 "error_recovery_level": 0, 00:04:19.224 "nop_timeout": 60, 00:04:19.224 "nop_in_interval": 30, 00:04:19.224 "disable_chap": false, 00:04:19.224 "require_chap": false, 00:04:19.224 "mutual_chap": false, 00:04:19.224 "chap_group": 0, 00:04:19.224 "max_large_datain_per_connection": 64, 00:04:19.224 "max_r2t_per_connection": 4, 00:04:19.224 "pdu_pool_size": 36864, 00:04:19.224 "immediate_data_pool_size": 16384, 00:04:19.224 "data_out_pool_size": 2048 00:04:19.224 } 00:04:19.224 } 00:04:19.224 ] 00:04:19.224 } 00:04:19.224 ] 00:04:19.224 } 00:04:19.224 10:47:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:19.224 10:47:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57238 00:04:19.224 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57238 ']' 00:04:19.224 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57238 00:04:19.224 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:19.224 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.224 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57238 00:04:19.224 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.224 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.224 killing process with pid 57238 00:04:19.224 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57238' 00:04:19.224 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57238 00:04:19.224 10:47:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57238 00:04:19.483 10:47:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57271 00:04:19.483 10:47:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.483 10:47:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:24.779 10:47:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57271 00:04:24.779 10:47:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57271 ']' 00:04:24.779 10:47:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57271 00:04:24.779 10:47:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:24.779 10:47:17 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.779 10:47:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57271 00:04:24.779 killing process with pid 57271 00:04:24.779 10:47:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.779 10:47:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.779 10:47:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57271' 00:04:24.779 10:47:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57271 00:04:24.779 10:47:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57271 00:04:25.037 10:47:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:25.037 10:47:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:25.037 00:04:25.037 real 0m7.016s 00:04:25.037 user 0m6.796s 00:04:25.037 sys 0m0.600s 00:04:25.037 10:47:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.037 10:47:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.037 ************************************ 00:04:25.037 END TEST skip_rpc_with_json 00:04:25.037 ************************************ 00:04:25.037 10:47:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:25.037 10:47:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.037 10:47:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.037 10:47:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.037 ************************************ 00:04:25.037 START TEST skip_rpc_with_delay 00:04:25.037 ************************************ 00:04:25.037 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:25.037 10:47:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.037 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:25.037 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.037 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.037 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.037 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.037 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.037 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.037 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.037 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.037 10:47:18 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:25.037 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.295 [2024-12-09 10:47:18.222300] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:25.295 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:25.295 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:25.295 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:25.295 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:25.295 00:04:25.295 real 0m0.092s 00:04:25.295 user 0m0.047s 00:04:25.295 sys 0m0.043s 00:04:25.295 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.295 10:47:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:25.295 ************************************ 00:04:25.295 END TEST skip_rpc_with_delay 00:04:25.295 ************************************ 00:04:25.295 10:47:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:25.295 10:47:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:25.295 10:47:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:25.295 10:47:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.295 10:47:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.295 10:47:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.295 ************************************ 00:04:25.295 START TEST exit_on_failed_rpc_init 00:04:25.295 ************************************ 00:04:25.295 10:47:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:25.295 10:47:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57375 00:04:25.295 10:47:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.295 10:47:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57375 00:04:25.295 10:47:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57375 ']' 00:04:25.295 10:47:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.295 10:47:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.295 10:47:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.295 10:47:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.295 10:47:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:25.295 [2024-12-09 10:47:18.381431] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:04:25.295 [2024-12-09 10:47:18.381654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57375 ] 00:04:25.554 [2024-12-09 10:47:18.519714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.554 [2024-12-09 10:47:18.577621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.554 [2024-12-09 10:47:18.641107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:26.498 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.498 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:26.498 10:47:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.498 10:47:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:26.498 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:26.498 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:26.498 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.498 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.498 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.498 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.498 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.498 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:26.498 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:26.498 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:26.498 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:26.498 [2024-12-09 10:47:19.378952] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:04:26.498 [2024-12-09 10:47:19.379120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57393 ] 00:04:26.498 [2024-12-09 10:47:19.517252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.498 [2024-12-09 10:47:19.575721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.498 [2024-12-09 10:47:19.575917] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:26.498 [2024-12-09 10:47:19.575969] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:26.498 [2024-12-09 10:47:19.575998] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57375 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57375 ']' 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57375 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57375 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57375' 00:04:26.774 killing process with pid 57375 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57375 00:04:26.774 10:47:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57375 00:04:27.042 00:04:27.042 real 0m1.795s 00:04:27.042 user 0m2.078s 00:04:27.042 sys 0m0.387s 00:04:27.042 10:47:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.042 ************************************ 00:04:27.042 END TEST exit_on_failed_rpc_init 00:04:27.042 ************************************ 00:04:27.042 10:47:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:27.042 10:47:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:27.042 00:04:27.042 real 0m14.842s 00:04:27.042 user 0m14.236s 00:04:27.042 sys 0m1.601s 00:04:27.042 ************************************ 00:04:27.042 END TEST skip_rpc 00:04:27.042 ************************************ 00:04:27.042 10:47:20 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.042 10:47:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.300 10:47:20 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:27.300 10:47:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.300 10:47:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.300 10:47:20 -- common/autotest_common.sh@10 -- # set +x 00:04:27.300 
************************************ 00:04:27.300 START TEST rpc_client 00:04:27.300 ************************************ 00:04:27.300 10:47:20 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:27.300 * Looking for test storage... 00:04:27.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:27.300 10:47:20 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:27.300 10:47:20 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:27.300 10:47:20 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:27.300 10:47:20 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.300 10:47:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:27.558 10:47:20 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.558 10:47:20 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:27.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.558 --rc genhtml_branch_coverage=1 00:04:27.558 --rc genhtml_function_coverage=1 00:04:27.558 --rc genhtml_legend=1 00:04:27.558 --rc geninfo_all_blocks=1 00:04:27.558 --rc geninfo_unexecuted_blocks=1 00:04:27.558 00:04:27.558 ' 00:04:27.558 10:47:20 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:27.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.558 --rc genhtml_branch_coverage=1 00:04:27.559 --rc genhtml_function_coverage=1 00:04:27.559 --rc genhtml_legend=1 00:04:27.559 --rc geninfo_all_blocks=1 00:04:27.559 --rc geninfo_unexecuted_blocks=1 00:04:27.559 00:04:27.559 ' 00:04:27.559 10:47:20 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:27.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.559 --rc genhtml_branch_coverage=1 00:04:27.559 --rc genhtml_function_coverage=1 00:04:27.559 --rc genhtml_legend=1 00:04:27.559 --rc geninfo_all_blocks=1 00:04:27.559 --rc geninfo_unexecuted_blocks=1 00:04:27.559 00:04:27.559 ' 00:04:27.559 10:47:20 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:27.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.559 --rc genhtml_branch_coverage=1 00:04:27.559 --rc genhtml_function_coverage=1 00:04:27.559 --rc genhtml_legend=1 00:04:27.559 --rc geninfo_all_blocks=1 00:04:27.559 --rc geninfo_unexecuted_blocks=1 00:04:27.559 00:04:27.559 ' 00:04:27.559 10:47:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:27.559 OK 00:04:27.559 10:47:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:27.559 00:04:27.559 real 0m0.271s 00:04:27.559 user 0m0.155s 00:04:27.559 sys 0m0.131s 00:04:27.559 10:47:20 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.559 10:47:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:27.559 ************************************ 00:04:27.559 END TEST rpc_client 00:04:27.559 ************************************ 00:04:27.559 10:47:20 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:27.559 10:47:20 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.559 10:47:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.559 10:47:20 -- common/autotest_common.sh@10 -- # set +x 00:04:27.559 ************************************ 00:04:27.559 START TEST json_config 00:04:27.559 ************************************ 00:04:27.559 10:47:20 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:27.559 10:47:20 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:27.559 10:47:20 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:27.559 10:47:20 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:27.818 10:47:20 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:27.818 10:47:20 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.818 10:47:20 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.818 10:47:20 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.818 10:47:20 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.818 10:47:20 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.818 10:47:20 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.818 10:47:20 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.818 10:47:20 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.818 10:47:20 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.818 10:47:20 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.818 10:47:20 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.818 10:47:20 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:27.818 10:47:20 json_config -- scripts/common.sh@345 -- # : 1 00:04:27.818 10:47:20 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.818 10:47:20 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.818 10:47:20 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:27.818 10:47:20 json_config -- scripts/common.sh@353 -- # local d=1 00:04:27.818 10:47:20 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.818 10:47:20 json_config -- scripts/common.sh@355 -- # echo 1 00:04:27.818 10:47:20 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.818 10:47:20 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:27.818 10:47:20 json_config -- scripts/common.sh@353 -- # local d=2 00:04:27.818 10:47:20 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.818 10:47:20 json_config -- scripts/common.sh@355 -- # echo 2 00:04:27.818 10:47:20 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.818 10:47:20 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.818 10:47:20 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.818 10:47:20 json_config -- scripts/common.sh@368 -- # return 0 00:04:27.818 10:47:20 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.818 10:47:20 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:27.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.818 --rc genhtml_branch_coverage=1 00:04:27.818 --rc genhtml_function_coverage=1 00:04:27.818 --rc genhtml_legend=1 00:04:27.818 --rc geninfo_all_blocks=1 00:04:27.818 --rc geninfo_unexecuted_blocks=1 00:04:27.818 00:04:27.818 ' 00:04:27.818 10:47:20 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:27.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.818 --rc genhtml_branch_coverage=1 00:04:27.818 --rc genhtml_function_coverage=1 00:04:27.818 --rc genhtml_legend=1 00:04:27.818 --rc geninfo_all_blocks=1 00:04:27.818 --rc geninfo_unexecuted_blocks=1 00:04:27.818 00:04:27.818 ' 00:04:27.818 10:47:20 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:27.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.818 --rc genhtml_branch_coverage=1 00:04:27.818 --rc genhtml_function_coverage=1 00:04:27.818 --rc genhtml_legend=1 00:04:27.818 --rc geninfo_all_blocks=1 00:04:27.818 --rc geninfo_unexecuted_blocks=1 00:04:27.818 00:04:27.818 ' 00:04:27.818 10:47:20 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:27.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.818 --rc genhtml_branch_coverage=1 00:04:27.818 --rc genhtml_function_coverage=1 00:04:27.818 --rc genhtml_legend=1 00:04:27.818 --rc geninfo_all_blocks=1 00:04:27.818 --rc geninfo_unexecuted_blocks=1 00:04:27.818 00:04:27.818 ' 00:04:27.818 10:47:20 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:27.818 10:47:20 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:27.818 10:47:20 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:27.818 10:47:20 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:27.818 10:47:20 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:27.818 10:47:20 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:27.818 10:47:20 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:27.818 10:47:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.819 10:47:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.819 10:47:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.819 10:47:20 json_config -- paths/export.sh@5 -- # export PATH 00:04:27.819 10:47:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:27.819 10:47:20 json_config -- nvmf/common.sh@51 -- # : 0 00:04:27.819 10:47:20 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:27.819 10:47:20 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:27.819 10:47:20 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:27.819 10:47:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:27.819 10:47:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:27.819 10:47:20 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:27.819 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:27.819 10:47:20 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:27.819 10:47:20 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:27.819 10:47:20 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:27.819 INFO: JSON configuration test init 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:27.819 10:47:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:27.819 10:47:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:27.819 10:47:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:27.819 10:47:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.819 Waiting for target to run... 
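The target is started paused with --wait-for-rpc and the test then waits until the RPC socket answers before configuring it. A rough way to reproduce that handshake outside the harness is sketched below, assuming the same SPDK checkout and socket path; rpc_get_methods and framework_start_init are the generic SPDK RPCs for polling and resuming initialization, not the waitforlisten helper the log itself uses.
# start spdk_tgt paused on a private RPC socket
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# poll until the socket accepts RPCs, then let subsystem initialization proceed
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init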
00:04:27.819 10:47:20 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:27.819 10:47:20 json_config -- json_config/common.sh@9 -- # local app=target 00:04:27.819 10:47:20 json_config -- json_config/common.sh@10 -- # shift 00:04:27.819 10:47:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:27.819 10:47:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:27.819 10:47:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:27.819 10:47:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.819 10:47:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.819 10:47:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57538 00:04:27.819 10:47:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:27.819 10:47:20 json_config -- json_config/common.sh@25 -- # waitforlisten 57538 /var/tmp/spdk_tgt.sock 00:04:27.819 10:47:20 json_config -- common/autotest_common.sh@835 -- # '[' -z 57538 ']' 00:04:27.819 10:47:20 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:27.819 10:47:20 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.819 10:47:20 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.819 10:47:20 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:27.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:27.819 10:47:20 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.819 10:47:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.819 [2024-12-09 10:47:20.915464] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:04:27.819 [2024-12-09 10:47:20.916015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57538 ] 00:04:28.385 [2024-12-09 10:47:21.286538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.385 [2024-12-09 10:47:21.350245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.951 10:47:21 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.951 10:47:21 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:28.951 10:47:21 json_config -- json_config/common.sh@26 -- # echo '' 00:04:28.951 00:04:28.951 10:47:21 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:28.951 10:47:21 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:28.951 10:47:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.951 10:47:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.951 10:47:21 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:28.951 10:47:21 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:28.951 10:47:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:28.951 10:47:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.951 10:47:21 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:28.951 10:47:21 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:28.951 10:47:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:29.209 [2024-12-09 10:47:22.215854] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:29.467 10:47:22 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:29.467 10:47:22 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:29.467 10:47:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.467 10:47:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.467 10:47:22 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:29.467 10:47:22 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:29.467 10:47:22 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:29.467 10:47:22 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:29.467 10:47:22 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:29.467 10:47:22 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:29.467 10:47:22 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:29.467 10:47:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@54 -- # sort 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:29.725 10:47:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.725 10:47:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:29.725 10:47:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.725 10:47:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:29.725 10:47:22 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:29.725 10:47:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:29.983 MallocForNvmf0 00:04:29.983 10:47:23 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:29.983 10:47:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:30.241 MallocForNvmf1 00:04:30.241 10:47:23 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.241 10:47:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.498 [2024-12-09 10:47:23.508673] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.498 10:47:23 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.498 10:47:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.756 10:47:23 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.756 10:47:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:31.014 10:47:23 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:31.014 10:47:23 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:31.271 10:47:24 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:31.271 10:47:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:31.271 [2024-12-09 10:47:24.391567] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.271 10:47:24 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:31.271 10:47:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.271 10:47:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.529 10:47:24 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:31.529 10:47:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.529 10:47:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.529 10:47:24 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:31.529 10:47:24 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.529 10:47:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.787 MallocBdevForConfigChangeCheck 00:04:31.787 10:47:24 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:31.787 10:47:24 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.787 10:47:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.787 10:47:24 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:31.787 10:47:24 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.054 INFO: shutting down applications... 00:04:32.054 10:47:25 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
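The shutdown path that follows first clears every subsystem over RPC and then checks that nothing is left in the saved config. Stripped of the retry loop it amounts to roughly the pipeline below, using the same socket and repo paths as the surrounding log; the order of the filter stages is reconstructed from the config_filter.py calls that appear further down.
# wipe the live configuration, then verify an effectively empty config remains
/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
  | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters \
  | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty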
00:04:32.054 10:47:25 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:32.054 10:47:25 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:32.054 10:47:25 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:32.054 10:47:25 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:32.636 Calling clear_iscsi_subsystem 00:04:32.636 Calling clear_nvmf_subsystem 00:04:32.636 Calling clear_nbd_subsystem 00:04:32.636 Calling clear_ublk_subsystem 00:04:32.636 Calling clear_vhost_blk_subsystem 00:04:32.636 Calling clear_vhost_scsi_subsystem 00:04:32.636 Calling clear_bdev_subsystem 00:04:32.636 10:47:25 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:32.636 10:47:25 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:32.636 10:47:25 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:32.636 10:47:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.636 10:47:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:32.636 10:47:25 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:32.894 10:47:25 json_config -- json_config/json_config.sh@352 -- # break 00:04:32.894 10:47:25 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:32.894 10:47:25 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:32.894 10:47:25 json_config -- json_config/common.sh@31 -- # local app=target 00:04:32.894 10:47:25 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:32.894 10:47:25 json_config -- json_config/common.sh@35 -- # [[ -n 57538 ]] 00:04:32.894 10:47:25 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57538 00:04:32.894 10:47:25 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:32.894 10:47:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.894 10:47:25 json_config -- json_config/common.sh@41 -- # kill -0 57538 00:04:32.894 10:47:25 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.459 10:47:26 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.459 10:47:26 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.459 10:47:26 json_config -- json_config/common.sh@41 -- # kill -0 57538 00:04:33.459 10:47:26 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:33.459 10:47:26 json_config -- json_config/common.sh@43 -- # break 00:04:33.459 SPDK target shutdown done 00:04:33.459 INFO: relaunching applications... 00:04:33.459 10:47:26 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:33.459 10:47:26 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:33.459 10:47:26 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
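Relaunching reuses the configuration that was just saved: the target is restarted without --wait-for-rpc and pointed at the JSON file via --json instead of being reconfigured over RPC. In isolation that step looks roughly like this, with the same binary, flags and paths as the spdk_tgt command line in the log below.
# persist the running config, then restart the target from it
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &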
00:04:33.459 10:47:26 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:33.459 10:47:26 json_config -- json_config/common.sh@9 -- # local app=target 00:04:33.459 10:47:26 json_config -- json_config/common.sh@10 -- # shift 00:04:33.459 10:47:26 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.459 10:47:26 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.459 Waiting for target to run... 00:04:33.459 10:47:26 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.459 10:47:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.459 10:47:26 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.459 10:47:26 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57728 00:04:33.459 10:47:26 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.459 10:47:26 json_config -- json_config/common.sh@25 -- # waitforlisten 57728 /var/tmp/spdk_tgt.sock 00:04:33.459 10:47:26 json_config -- common/autotest_common.sh@835 -- # '[' -z 57728 ']' 00:04:33.459 10:47:26 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.459 10:47:26 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.459 10:47:26 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:33.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.459 10:47:26 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.459 10:47:26 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.459 10:47:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.459 [2024-12-09 10:47:26.527236] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:04:33.459 [2024-12-09 10:47:26.527420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57728 ] 00:04:34.024 [2024-12-09 10:47:26.907858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.024 [2024-12-09 10:47:26.955709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.024 [2024-12-09 10:47:27.092678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:34.281 [2024-12-09 10:47:27.306575] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:34.281 [2024-12-09 10:47:27.338586] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:34.539 10:47:27 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.539 10:47:27 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:34.539 10:47:27 json_config -- json_config/common.sh@26 -- # echo '' 00:04:34.539 00:04:34.539 INFO: Checking if target configuration is the same... 
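The "same configuration" check below normalizes both sides with config_filter.py -method sort and compares them with plain diff -u: the live config is pulled over RPC, the reference is the spdk_tgt_config.json on disk, and any difference fails the test. A trimmed-down equivalent follows; the temporary file names here are illustrative, not the mktemp names the script generates.
# sort the running config and the on-disk config, then compare them
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
  | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > /tmp/running_config.json
/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
  < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/disk_config.json
diff -u /tmp/running_config.json /tmp/disk_config.json && echo 'INFO: JSON config files are the same'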
00:04:34.539 10:47:27 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:34.539 10:47:27 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:34.539 10:47:27 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:34.539 10:47:27 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:34.539 10:47:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.539 + '[' 2 -ne 2 ']' 00:04:34.539 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:34.539 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:34.539 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:34.539 +++ basename /dev/fd/62 00:04:34.539 ++ mktemp /tmp/62.XXX 00:04:34.539 + tmp_file_1=/tmp/62.61D 00:04:34.539 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:34.539 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:34.539 + tmp_file_2=/tmp/spdk_tgt_config.json.RIQ 00:04:34.539 + ret=0 00:04:34.539 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:34.797 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:34.797 + diff -u /tmp/62.61D /tmp/spdk_tgt_config.json.RIQ 00:04:34.797 INFO: JSON config files are the same 00:04:34.797 + echo 'INFO: JSON config files are the same' 00:04:34.797 + rm /tmp/62.61D /tmp/spdk_tgt_config.json.RIQ 00:04:34.797 + exit 0 00:04:34.797 INFO: changing configuration and checking if this can be detected... 00:04:34.797 10:47:27 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:34.797 10:47:27 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:34.797 10:47:27 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:34.797 10:47:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.055 10:47:28 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:35.055 10:47:28 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:35.055 10:47:28 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.055 + '[' 2 -ne 2 ']' 00:04:35.055 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:35.055 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
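The json_diff.sh run above dumps the live configuration with save_config, normalizes it and the on-disk JSON with config_filter.py -method sort, and diffs the results; identical files exit 0. A second run after deleting MallocBdevForConfigChangeCheck follows below and is expected to exit 1. A condensed sketch of the compare step, using only commands visible in the trace and assuming config_filter.py reads the config on stdin, as its invocation suggests:

# Sketch: compare the live target config against the saved JSON file.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
SOCK=/var/tmp/spdk_tgt.sock
SAVED=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

live=$(mktemp /tmp/62.XXX)
disk=$(mktemp /tmp/spdk_tgt_config.json.XXX)

"$RPC" -s "$SOCK" save_config | "$FILTER" -method sort > "$live"
"$FILTER" -method sort < "$SAVED" > "$disk"

if diff -u "$live" "$disk"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm "$live" "$disk"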
00:04:35.055 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:35.312 +++ basename /dev/fd/62 00:04:35.312 ++ mktemp /tmp/62.XXX 00:04:35.312 + tmp_file_1=/tmp/62.kfD 00:04:35.312 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:35.312 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:35.312 + tmp_file_2=/tmp/spdk_tgt_config.json.lNo 00:04:35.312 + ret=0 00:04:35.312 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:35.570 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:35.570 + diff -u /tmp/62.kfD /tmp/spdk_tgt_config.json.lNo 00:04:35.570 + ret=1 00:04:35.570 + echo '=== Start of file: /tmp/62.kfD ===' 00:04:35.570 + cat /tmp/62.kfD 00:04:35.570 + echo '=== End of file: /tmp/62.kfD ===' 00:04:35.570 + echo '' 00:04:35.570 + echo '=== Start of file: /tmp/spdk_tgt_config.json.lNo ===' 00:04:35.570 + cat /tmp/spdk_tgt_config.json.lNo 00:04:35.570 + echo '=== End of file: /tmp/spdk_tgt_config.json.lNo ===' 00:04:35.570 + echo '' 00:04:35.570 + rm /tmp/62.kfD /tmp/spdk_tgt_config.json.lNo 00:04:35.570 + exit 1 00:04:35.570 INFO: configuration change detected. 00:04:35.570 10:47:28 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:35.570 10:47:28 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:35.570 10:47:28 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:35.570 10:47:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.570 10:47:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.570 10:47:28 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:35.570 10:47:28 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:35.570 10:47:28 json_config -- json_config/json_config.sh@324 -- # [[ -n 57728 ]] 00:04:35.570 10:47:28 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:35.570 10:47:28 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:35.570 10:47:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.570 10:47:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.570 10:47:28 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:35.570 10:47:28 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:35.570 10:47:28 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:35.570 10:47:28 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:35.570 10:47:28 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:35.570 10:47:28 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:35.570 10:47:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:35.570 10:47:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.827 10:47:28 json_config -- json_config/json_config.sh@330 -- # killprocess 57728 00:04:35.827 10:47:28 json_config -- common/autotest_common.sh@954 -- # '[' -z 57728 ']' 00:04:35.827 10:47:28 json_config -- common/autotest_common.sh@958 -- # kill -0 57728 00:04:35.827 10:47:28 json_config -- common/autotest_common.sh@959 -- # uname 00:04:35.827 10:47:28 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.827 10:47:28 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57728 00:04:35.827 
10:47:28 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.827 killing process with pid 57728 00:04:35.827 10:47:28 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.827 10:47:28 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57728' 00:04:35.827 10:47:28 json_config -- common/autotest_common.sh@973 -- # kill 57728 00:04:35.827 10:47:28 json_config -- common/autotest_common.sh@978 -- # wait 57728 00:04:36.085 10:47:29 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:36.085 10:47:29 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:36.085 10:47:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.085 10:47:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.085 INFO: Success 00:04:36.085 10:47:29 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:36.085 10:47:29 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:36.085 ************************************ 00:04:36.085 END TEST json_config 00:04:36.085 ************************************ 00:04:36.085 00:04:36.085 real 0m8.550s 00:04:36.085 user 0m12.131s 00:04:36.085 sys 0m1.870s 00:04:36.085 10:47:29 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.085 10:47:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.085 10:47:29 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:36.085 10:47:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.085 10:47:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.085 10:47:29 -- common/autotest_common.sh@10 -- # set +x 00:04:36.085 ************************************ 00:04:36.085 START TEST json_config_extra_key 00:04:36.085 ************************************ 00:04:36.085 10:47:29 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:36.343 10:47:29 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.343 10:47:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.343 10:47:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.343 10:47:29 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.343 10:47:29 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.343 10:47:29 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.343 10:47:29 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.343 10:47:29 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.343 10:47:29 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.343 10:47:29 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.343 10:47:29 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.343 10:47:29 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.343 10:47:29 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.344 10:47:29 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:36.344 10:47:29 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.344 10:47:29 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.344 --rc genhtml_branch_coverage=1 00:04:36.344 --rc genhtml_function_coverage=1 00:04:36.344 --rc genhtml_legend=1 00:04:36.344 --rc geninfo_all_blocks=1 00:04:36.344 --rc geninfo_unexecuted_blocks=1 00:04:36.344 00:04:36.344 ' 00:04:36.344 10:47:29 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.344 --rc genhtml_branch_coverage=1 00:04:36.344 --rc genhtml_function_coverage=1 00:04:36.344 --rc genhtml_legend=1 00:04:36.344 --rc geninfo_all_blocks=1 00:04:36.344 --rc geninfo_unexecuted_blocks=1 00:04:36.344 00:04:36.344 ' 00:04:36.344 10:47:29 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:36.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.344 --rc genhtml_branch_coverage=1 00:04:36.344 --rc genhtml_function_coverage=1 00:04:36.344 --rc genhtml_legend=1 00:04:36.344 --rc geninfo_all_blocks=1 00:04:36.344 --rc geninfo_unexecuted_blocks=1 00:04:36.344 00:04:36.344 ' 00:04:36.344 10:47:29 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.344 --rc genhtml_branch_coverage=1 00:04:36.344 --rc genhtml_function_coverage=1 00:04:36.344 --rc genhtml_legend=1 00:04:36.344 --rc geninfo_all_blocks=1 00:04:36.344 --rc geninfo_unexecuted_blocks=1 00:04:36.344 00:04:36.344 ' 00:04:36.344 10:47:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.344 10:47:29 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.344 10:47:29 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.344 10:47:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.344 10:47:29 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.344 10:47:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:36.344 10:47:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:36.344 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:36.344 10:47:29 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:36.344 10:47:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:36.344 10:47:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:36.344 10:47:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:36.344 10:47:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:36.344 10:47:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:36.344 10:47:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:36.344 10:47:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:36.344 10:47:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:36.344 10:47:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:36.344 10:47:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:36.344 10:47:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:36.344 INFO: launching applications... 
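The "[: : integer expression expected" message above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']' when the guarded variable is empty; test(1) cannot compare an empty string numerically, so the check fails noisily but harmlessly. A hedged sketch of a quieter guard (the variable name is illustrative; the actual one in common.sh is not visible in this trace):

# Sketch: numeric test against a possibly-empty variable.
# '[' "" -eq 1 ']' prints "integer expression expected"; defaulting the
# expansion to 0 keeps the comparison purely numeric.
maybe_empty=""                         # illustrative; stands in for the unset flag
if [ "${maybe_empty:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi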
00:04:36.344 10:47:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:36.344 10:47:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:36.344 10:47:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:36.344 10:47:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.344 10:47:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.344 10:47:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.344 10:47:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.344 10:47:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.344 10:47:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57882 00:04:36.344 10:47:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.344 Waiting for target to run... 00:04:36.344 10:47:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57882 /var/tmp/spdk_tgt.sock 00:04:36.344 10:47:29 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57882 ']' 00:04:36.344 10:47:29 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:36.344 10:47:29 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.344 10:47:29 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.344 10:47:29 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.344 10:47:29 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.344 10:47:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:36.603 [2024-12-09 10:47:29.523128] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:04:36.603 [2024-12-09 10:47:29.523298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57882 ] 00:04:36.860 [2024-12-09 10:47:29.890244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.860 [2024-12-09 10:47:29.938381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.860 [2024-12-09 10:47:29.970293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:37.425 10:47:30 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.425 10:47:30 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:37.425 10:47:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:37.425 00:04:37.425 INFO: shutting down applications... 00:04:37.425 10:47:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
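The app_pid, app_socket, app_params and configs_path associative arrays declared above are how json_config/common.sh keeps per-application state keyed by app name ('target' here). A sketch of that bookkeeping pattern, with the values copied from the trace and the final pid assignment shown only for illustration:

# Sketch: per-app bookkeeping with bash associative arrays, as declared
# in json_config_extra_key.sh (values copied from the trace above).
declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')
declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

app=target
echo "would start: spdk_tgt ${app_params[$app]} -r ${app_socket[$app]} --json ${configs_path[$app]}"
app_pid[$app]=57882            # pid recorded by common.sh; value from the trace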
00:04:37.425 10:47:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:37.425 10:47:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:37.425 10:47:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:37.425 10:47:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57882 ]] 00:04:37.425 10:47:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57882 00:04:37.425 10:47:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:37.425 10:47:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.425 10:47:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57882 00:04:37.425 10:47:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:37.991 10:47:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:37.991 10:47:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.991 10:47:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57882 00:04:37.991 10:47:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:37.991 10:47:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:37.991 SPDK target shutdown done 00:04:37.991 10:47:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:37.991 10:47:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:37.991 Success 00:04:37.991 10:47:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:37.991 ************************************ 00:04:37.991 END TEST json_config_extra_key 00:04:37.991 ************************************ 00:04:37.991 00:04:37.991 real 0m1.787s 00:04:37.991 user 0m1.653s 00:04:37.991 sys 0m0.445s 00:04:37.991 10:47:30 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.991 10:47:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:37.991 10:47:31 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.991 10:47:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.991 10:47:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.991 10:47:31 -- common/autotest_common.sh@10 -- # set +x 00:04:37.991 ************************************ 00:04:37.991 START TEST alias_rpc 00:04:37.991 ************************************ 00:04:37.991 10:47:31 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:37.991 * Looking for test storage... 
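Each TEST prologue in this log (including the json_config_extra_key one above and the alias_rpc one that follows) gates extra lcov coverage options on the installed lcov version via scripts/common.sh: cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field (lt 1.15 2 in the trace). A compact sketch of that field-wise comparison, simplified from the traced logic and assuming purely numeric fields:

# Sketch: field-wise version comparison in the style of scripts/common.sh.
# Returns 0 (true) when $1 is strictly less than $2, e.g. lt 1.15 2.
lt() {
    local IFS=.-:                      # same separators as the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                           # equal versions are not "less than"
}

lt 1.15 2 && echo "old lcov: enable the extra --rc coverage options"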
00:04:38.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:38.291 10:47:31 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:38.291 10:47:31 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:38.291 10:47:31 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:38.291 10:47:31 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.291 10:47:31 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:38.291 10:47:31 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.291 10:47:31 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:38.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.291 --rc genhtml_branch_coverage=1 00:04:38.291 --rc genhtml_function_coverage=1 00:04:38.291 --rc genhtml_legend=1 00:04:38.291 --rc geninfo_all_blocks=1 00:04:38.291 --rc geninfo_unexecuted_blocks=1 00:04:38.291 00:04:38.291 ' 00:04:38.291 10:47:31 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:38.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.291 --rc genhtml_branch_coverage=1 00:04:38.291 --rc genhtml_function_coverage=1 00:04:38.291 --rc genhtml_legend=1 00:04:38.291 --rc geninfo_all_blocks=1 00:04:38.291 --rc geninfo_unexecuted_blocks=1 00:04:38.291 00:04:38.291 ' 00:04:38.291 10:47:31 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:38.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.291 --rc genhtml_branch_coverage=1 00:04:38.291 --rc genhtml_function_coverage=1 00:04:38.291 --rc genhtml_legend=1 00:04:38.291 --rc geninfo_all_blocks=1 00:04:38.291 --rc geninfo_unexecuted_blocks=1 00:04:38.291 00:04:38.291 ' 00:04:38.291 10:47:31 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:38.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.291 --rc genhtml_branch_coverage=1 00:04:38.291 --rc genhtml_function_coverage=1 00:04:38.291 --rc genhtml_legend=1 00:04:38.291 --rc geninfo_all_blocks=1 00:04:38.291 --rc geninfo_unexecuted_blocks=1 00:04:38.291 00:04:38.291 ' 00:04:38.291 10:47:31 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:38.291 10:47:31 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57960 00:04:38.291 10:47:31 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.291 10:47:31 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57960 00:04:38.291 10:47:31 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57960 ']' 00:04:38.291 10:47:31 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.291 10:47:31 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.291 10:47:31 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.291 10:47:31 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.291 10:47:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.291 [2024-12-09 10:47:31.356124] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:04:38.291 [2024-12-09 10:47:31.356289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57960 ] 00:04:38.551 [2024-12-09 10:47:31.507870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.552 [2024-12-09 10:47:31.566209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.552 [2024-12-09 10:47:31.628133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:39.118 10:47:32 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.118 10:47:32 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:39.118 10:47:32 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:39.684 10:47:32 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57960 00:04:39.684 10:47:32 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57960 ']' 00:04:39.684 10:47:32 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57960 00:04:39.684 10:47:32 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:39.684 10:47:32 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.684 10:47:32 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57960 00:04:39.684 10:47:32 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.684 killing process with pid 57960 00:04:39.684 10:47:32 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.684 10:47:32 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57960' 00:04:39.684 10:47:32 alias_rpc -- common/autotest_common.sh@973 -- # kill 57960 00:04:39.684 10:47:32 alias_rpc -- common/autotest_common.sh@978 -- # wait 57960 00:04:39.942 ************************************ 00:04:39.942 END TEST alias_rpc 00:04:39.942 ************************************ 00:04:39.942 00:04:39.942 real 0m1.961s 00:04:39.942 user 0m2.195s 00:04:39.942 sys 0m0.461s 00:04:39.942 10:47:33 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.942 10:47:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.942 10:47:33 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:39.942 10:47:33 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:39.942 10:47:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.942 10:47:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.942 10:47:33 -- common/autotest_common.sh@10 -- # set +x 00:04:39.942 ************************************ 00:04:39.942 START TEST spdkcli_tcp 00:04:39.942 ************************************ 00:04:39.942 10:47:33 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:40.200 * Looking for test storage... 
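killprocess, traced above for pid 57960, first confirms the pid still exists (kill -0), resolves the process name with ps --no-headers -o comm=, escalates only when that name is sudo, then kills and waits. A stand-alone sketch of that sequence, with the pid from the trace:

# Sketch: the killprocess pattern from common/autotest_common.sh, as traced.
pid=57960
if kill -0 "$pid" 2>/dev/null; then
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in the trace
    echo "killing process with pid $pid"
    if [ "$process_name" = sudo ]; then
        sudo kill "$pid"               # root-owned helper needs sudo
    else
        kill "$pid"                    # plain kill for the reactor process
    fi
    wait "$pid" 2>/dev/null || true    # reap it if it is our child
fi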
00:04:40.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:40.200 10:47:33 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:40.200 10:47:33 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:40.200 10:47:33 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:40.200 10:47:33 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.200 10:47:33 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:40.200 10:47:33 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.200 10:47:33 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:40.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.200 --rc genhtml_branch_coverage=1 00:04:40.201 --rc genhtml_function_coverage=1 00:04:40.201 --rc genhtml_legend=1 00:04:40.201 --rc geninfo_all_blocks=1 00:04:40.201 --rc geninfo_unexecuted_blocks=1 00:04:40.201 00:04:40.201 ' 00:04:40.201 10:47:33 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:40.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.201 --rc genhtml_branch_coverage=1 00:04:40.201 --rc genhtml_function_coverage=1 00:04:40.201 --rc genhtml_legend=1 00:04:40.201 --rc geninfo_all_blocks=1 00:04:40.201 --rc geninfo_unexecuted_blocks=1 00:04:40.201 
00:04:40.201 ' 00:04:40.201 10:47:33 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:40.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.201 --rc genhtml_branch_coverage=1 00:04:40.201 --rc genhtml_function_coverage=1 00:04:40.201 --rc genhtml_legend=1 00:04:40.201 --rc geninfo_all_blocks=1 00:04:40.201 --rc geninfo_unexecuted_blocks=1 00:04:40.201 00:04:40.201 ' 00:04:40.201 10:47:33 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:40.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.201 --rc genhtml_branch_coverage=1 00:04:40.201 --rc genhtml_function_coverage=1 00:04:40.201 --rc genhtml_legend=1 00:04:40.201 --rc geninfo_all_blocks=1 00:04:40.201 --rc geninfo_unexecuted_blocks=1 00:04:40.201 00:04:40.201 ' 00:04:40.201 10:47:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:40.201 10:47:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:40.201 10:47:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:40.201 10:47:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:40.201 10:47:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:40.201 10:47:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:40.201 10:47:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:40.201 10:47:33 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:40.201 10:47:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.201 10:47:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:40.201 10:47:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58039 00:04:40.201 10:47:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58039 00:04:40.201 10:47:33 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58039 ']' 00:04:40.201 10:47:33 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.201 10:47:33 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.201 10:47:33 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.201 10:47:33 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.201 10:47:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.201 [2024-12-09 10:47:33.361152] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:04:40.201 [2024-12-09 10:47:33.361302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58039 ] 00:04:40.459 [2024-12-09 10:47:33.514934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.459 [2024-12-09 10:47:33.574654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.459 [2024-12-09 10:47:33.574657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.459 [2024-12-09 10:47:33.637099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:41.395 10:47:34 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.395 10:47:34 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:41.395 10:47:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58056 00:04:41.395 10:47:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:41.395 10:47:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:41.654 [ 00:04:41.654 "bdev_malloc_delete", 00:04:41.654 "bdev_malloc_create", 00:04:41.654 "bdev_null_resize", 00:04:41.654 "bdev_null_delete", 00:04:41.654 "bdev_null_create", 00:04:41.654 "bdev_nvme_cuse_unregister", 00:04:41.654 "bdev_nvme_cuse_register", 00:04:41.654 "bdev_opal_new_user", 00:04:41.654 "bdev_opal_set_lock_state", 00:04:41.654 "bdev_opal_delete", 00:04:41.654 "bdev_opal_get_info", 00:04:41.654 "bdev_opal_create", 00:04:41.654 "bdev_nvme_opal_revert", 00:04:41.654 "bdev_nvme_opal_init", 00:04:41.654 "bdev_nvme_send_cmd", 00:04:41.654 "bdev_nvme_set_keys", 00:04:41.654 "bdev_nvme_get_path_iostat", 00:04:41.654 "bdev_nvme_get_mdns_discovery_info", 00:04:41.654 "bdev_nvme_stop_mdns_discovery", 00:04:41.654 "bdev_nvme_start_mdns_discovery", 00:04:41.654 "bdev_nvme_set_multipath_policy", 00:04:41.654 "bdev_nvme_set_preferred_path", 00:04:41.654 "bdev_nvme_get_io_paths", 00:04:41.654 "bdev_nvme_remove_error_injection", 00:04:41.654 "bdev_nvme_add_error_injection", 00:04:41.654 "bdev_nvme_get_discovery_info", 00:04:41.654 "bdev_nvme_stop_discovery", 00:04:41.654 "bdev_nvme_start_discovery", 00:04:41.654 "bdev_nvme_get_controller_health_info", 00:04:41.654 "bdev_nvme_disable_controller", 00:04:41.654 "bdev_nvme_enable_controller", 00:04:41.654 "bdev_nvme_reset_controller", 00:04:41.654 "bdev_nvme_get_transport_statistics", 00:04:41.654 "bdev_nvme_apply_firmware", 00:04:41.654 "bdev_nvme_detach_controller", 00:04:41.654 "bdev_nvme_get_controllers", 00:04:41.654 "bdev_nvme_attach_controller", 00:04:41.654 "bdev_nvme_set_hotplug", 00:04:41.654 "bdev_nvme_set_options", 00:04:41.654 "bdev_passthru_delete", 00:04:41.654 "bdev_passthru_create", 00:04:41.654 "bdev_lvol_set_parent_bdev", 00:04:41.654 "bdev_lvol_set_parent", 00:04:41.654 "bdev_lvol_check_shallow_copy", 00:04:41.654 "bdev_lvol_start_shallow_copy", 00:04:41.654 "bdev_lvol_grow_lvstore", 00:04:41.654 "bdev_lvol_get_lvols", 00:04:41.654 "bdev_lvol_get_lvstores", 00:04:41.654 "bdev_lvol_delete", 00:04:41.654 "bdev_lvol_set_read_only", 00:04:41.654 "bdev_lvol_resize", 00:04:41.654 "bdev_lvol_decouple_parent", 00:04:41.654 "bdev_lvol_inflate", 00:04:41.654 "bdev_lvol_rename", 00:04:41.654 "bdev_lvol_clone_bdev", 00:04:41.654 "bdev_lvol_clone", 00:04:41.654 "bdev_lvol_snapshot", 
00:04:41.654 "bdev_lvol_create", 00:04:41.654 "bdev_lvol_delete_lvstore", 00:04:41.654 "bdev_lvol_rename_lvstore", 00:04:41.654 "bdev_lvol_create_lvstore", 00:04:41.654 "bdev_raid_set_options", 00:04:41.654 "bdev_raid_remove_base_bdev", 00:04:41.654 "bdev_raid_add_base_bdev", 00:04:41.654 "bdev_raid_delete", 00:04:41.654 "bdev_raid_create", 00:04:41.654 "bdev_raid_get_bdevs", 00:04:41.654 "bdev_error_inject_error", 00:04:41.654 "bdev_error_delete", 00:04:41.654 "bdev_error_create", 00:04:41.654 "bdev_split_delete", 00:04:41.654 "bdev_split_create", 00:04:41.654 "bdev_delay_delete", 00:04:41.654 "bdev_delay_create", 00:04:41.654 "bdev_delay_update_latency", 00:04:41.654 "bdev_zone_block_delete", 00:04:41.654 "bdev_zone_block_create", 00:04:41.654 "blobfs_create", 00:04:41.654 "blobfs_detect", 00:04:41.654 "blobfs_set_cache_size", 00:04:41.654 "bdev_aio_delete", 00:04:41.654 "bdev_aio_rescan", 00:04:41.654 "bdev_aio_create", 00:04:41.654 "bdev_ftl_set_property", 00:04:41.654 "bdev_ftl_get_properties", 00:04:41.654 "bdev_ftl_get_stats", 00:04:41.654 "bdev_ftl_unmap", 00:04:41.654 "bdev_ftl_unload", 00:04:41.654 "bdev_ftl_delete", 00:04:41.654 "bdev_ftl_load", 00:04:41.654 "bdev_ftl_create", 00:04:41.654 "bdev_virtio_attach_controller", 00:04:41.654 "bdev_virtio_scsi_get_devices", 00:04:41.654 "bdev_virtio_detach_controller", 00:04:41.654 "bdev_virtio_blk_set_hotplug", 00:04:41.654 "bdev_iscsi_delete", 00:04:41.654 "bdev_iscsi_create", 00:04:41.654 "bdev_iscsi_set_options", 00:04:41.654 "bdev_uring_delete", 00:04:41.654 "bdev_uring_rescan", 00:04:41.654 "bdev_uring_create", 00:04:41.654 "accel_error_inject_error", 00:04:41.654 "ioat_scan_accel_module", 00:04:41.654 "dsa_scan_accel_module", 00:04:41.654 "iaa_scan_accel_module", 00:04:41.654 "keyring_file_remove_key", 00:04:41.654 "keyring_file_add_key", 00:04:41.654 "keyring_linux_set_options", 00:04:41.654 "fsdev_aio_delete", 00:04:41.654 "fsdev_aio_create", 00:04:41.654 "iscsi_get_histogram", 00:04:41.654 "iscsi_enable_histogram", 00:04:41.654 "iscsi_set_options", 00:04:41.654 "iscsi_get_auth_groups", 00:04:41.654 "iscsi_auth_group_remove_secret", 00:04:41.654 "iscsi_auth_group_add_secret", 00:04:41.654 "iscsi_delete_auth_group", 00:04:41.654 "iscsi_create_auth_group", 00:04:41.654 "iscsi_set_discovery_auth", 00:04:41.654 "iscsi_get_options", 00:04:41.654 "iscsi_target_node_request_logout", 00:04:41.654 "iscsi_target_node_set_redirect", 00:04:41.654 "iscsi_target_node_set_auth", 00:04:41.654 "iscsi_target_node_add_lun", 00:04:41.654 "iscsi_get_stats", 00:04:41.654 "iscsi_get_connections", 00:04:41.654 "iscsi_portal_group_set_auth", 00:04:41.654 "iscsi_start_portal_group", 00:04:41.654 "iscsi_delete_portal_group", 00:04:41.654 "iscsi_create_portal_group", 00:04:41.654 "iscsi_get_portal_groups", 00:04:41.654 "iscsi_delete_target_node", 00:04:41.654 "iscsi_target_node_remove_pg_ig_maps", 00:04:41.654 "iscsi_target_node_add_pg_ig_maps", 00:04:41.654 "iscsi_create_target_node", 00:04:41.654 "iscsi_get_target_nodes", 00:04:41.654 "iscsi_delete_initiator_group", 00:04:41.654 "iscsi_initiator_group_remove_initiators", 00:04:41.654 "iscsi_initiator_group_add_initiators", 00:04:41.654 "iscsi_create_initiator_group", 00:04:41.654 "iscsi_get_initiator_groups", 00:04:41.654 "nvmf_set_crdt", 00:04:41.654 "nvmf_set_config", 00:04:41.654 "nvmf_set_max_subsystems", 00:04:41.654 "nvmf_stop_mdns_prr", 00:04:41.654 "nvmf_publish_mdns_prr", 00:04:41.654 "nvmf_subsystem_get_listeners", 00:04:41.654 "nvmf_subsystem_get_qpairs", 00:04:41.654 
"nvmf_subsystem_get_controllers", 00:04:41.654 "nvmf_get_stats", 00:04:41.654 "nvmf_get_transports", 00:04:41.654 "nvmf_create_transport", 00:04:41.654 "nvmf_get_targets", 00:04:41.654 "nvmf_delete_target", 00:04:41.654 "nvmf_create_target", 00:04:41.654 "nvmf_subsystem_allow_any_host", 00:04:41.654 "nvmf_subsystem_set_keys", 00:04:41.654 "nvmf_subsystem_remove_host", 00:04:41.654 "nvmf_subsystem_add_host", 00:04:41.654 "nvmf_ns_remove_host", 00:04:41.654 "nvmf_ns_add_host", 00:04:41.654 "nvmf_subsystem_remove_ns", 00:04:41.654 "nvmf_subsystem_set_ns_ana_group", 00:04:41.654 "nvmf_subsystem_add_ns", 00:04:41.654 "nvmf_subsystem_listener_set_ana_state", 00:04:41.654 "nvmf_discovery_get_referrals", 00:04:41.654 "nvmf_discovery_remove_referral", 00:04:41.654 "nvmf_discovery_add_referral", 00:04:41.654 "nvmf_subsystem_remove_listener", 00:04:41.654 "nvmf_subsystem_add_listener", 00:04:41.654 "nvmf_delete_subsystem", 00:04:41.654 "nvmf_create_subsystem", 00:04:41.654 "nvmf_get_subsystems", 00:04:41.654 "env_dpdk_get_mem_stats", 00:04:41.654 "nbd_get_disks", 00:04:41.654 "nbd_stop_disk", 00:04:41.654 "nbd_start_disk", 00:04:41.654 "ublk_recover_disk", 00:04:41.654 "ublk_get_disks", 00:04:41.654 "ublk_stop_disk", 00:04:41.654 "ublk_start_disk", 00:04:41.654 "ublk_destroy_target", 00:04:41.654 "ublk_create_target", 00:04:41.654 "virtio_blk_create_transport", 00:04:41.654 "virtio_blk_get_transports", 00:04:41.654 "vhost_controller_set_coalescing", 00:04:41.654 "vhost_get_controllers", 00:04:41.655 "vhost_delete_controller", 00:04:41.655 "vhost_create_blk_controller", 00:04:41.655 "vhost_scsi_controller_remove_target", 00:04:41.655 "vhost_scsi_controller_add_target", 00:04:41.655 "vhost_start_scsi_controller", 00:04:41.655 "vhost_create_scsi_controller", 00:04:41.655 "thread_set_cpumask", 00:04:41.655 "scheduler_set_options", 00:04:41.655 "framework_get_governor", 00:04:41.655 "framework_get_scheduler", 00:04:41.655 "framework_set_scheduler", 00:04:41.655 "framework_get_reactors", 00:04:41.655 "thread_get_io_channels", 00:04:41.655 "thread_get_pollers", 00:04:41.655 "thread_get_stats", 00:04:41.655 "framework_monitor_context_switch", 00:04:41.655 "spdk_kill_instance", 00:04:41.655 "log_enable_timestamps", 00:04:41.655 "log_get_flags", 00:04:41.655 "log_clear_flag", 00:04:41.655 "log_set_flag", 00:04:41.655 "log_get_level", 00:04:41.655 "log_set_level", 00:04:41.655 "log_get_print_level", 00:04:41.655 "log_set_print_level", 00:04:41.655 "framework_enable_cpumask_locks", 00:04:41.655 "framework_disable_cpumask_locks", 00:04:41.655 "framework_wait_init", 00:04:41.655 "framework_start_init", 00:04:41.655 "scsi_get_devices", 00:04:41.655 "bdev_get_histogram", 00:04:41.655 "bdev_enable_histogram", 00:04:41.655 "bdev_set_qos_limit", 00:04:41.655 "bdev_set_qd_sampling_period", 00:04:41.655 "bdev_get_bdevs", 00:04:41.655 "bdev_reset_iostat", 00:04:41.655 "bdev_get_iostat", 00:04:41.655 "bdev_examine", 00:04:41.655 "bdev_wait_for_examine", 00:04:41.655 "bdev_set_options", 00:04:41.655 "accel_get_stats", 00:04:41.655 "accel_set_options", 00:04:41.655 "accel_set_driver", 00:04:41.655 "accel_crypto_key_destroy", 00:04:41.655 "accel_crypto_keys_get", 00:04:41.655 "accel_crypto_key_create", 00:04:41.655 "accel_assign_opc", 00:04:41.655 "accel_get_module_info", 00:04:41.655 "accel_get_opc_assignments", 00:04:41.655 "vmd_rescan", 00:04:41.655 "vmd_remove_device", 00:04:41.655 "vmd_enable", 00:04:41.655 "sock_get_default_impl", 00:04:41.655 "sock_set_default_impl", 00:04:41.655 "sock_impl_set_options", 00:04:41.655 
"sock_impl_get_options", 00:04:41.655 "iobuf_get_stats", 00:04:41.655 "iobuf_set_options", 00:04:41.655 "keyring_get_keys", 00:04:41.655 "framework_get_pci_devices", 00:04:41.655 "framework_get_config", 00:04:41.655 "framework_get_subsystems", 00:04:41.655 "fsdev_set_opts", 00:04:41.655 "fsdev_get_opts", 00:04:41.655 "trace_get_info", 00:04:41.655 "trace_get_tpoint_group_mask", 00:04:41.655 "trace_disable_tpoint_group", 00:04:41.655 "trace_enable_tpoint_group", 00:04:41.655 "trace_clear_tpoint_mask", 00:04:41.655 "trace_set_tpoint_mask", 00:04:41.655 "notify_get_notifications", 00:04:41.655 "notify_get_types", 00:04:41.655 "spdk_get_version", 00:04:41.655 "rpc_get_methods" 00:04:41.655 ] 00:04:41.655 10:47:34 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:41.655 10:47:34 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:41.655 10:47:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.655 10:47:34 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:41.655 10:47:34 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58039 00:04:41.655 10:47:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58039 ']' 00:04:41.655 10:47:34 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58039 00:04:41.655 10:47:34 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:41.655 10:47:34 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.655 10:47:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58039 00:04:41.655 10:47:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.655 10:47:34 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.655 10:47:34 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58039' 00:04:41.655 killing process with pid 58039 00:04:41.655 10:47:34 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58039 00:04:41.655 10:47:34 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58039 00:04:41.913 00:04:41.913 real 0m1.990s 00:04:41.913 user 0m3.636s 00:04:41.913 sys 0m0.471s 00:04:41.913 ************************************ 00:04:41.913 END TEST spdkcli_tcp 00:04:41.913 ************************************ 00:04:41.913 10:47:35 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.913 10:47:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.171 10:47:35 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:42.171 10:47:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.171 10:47:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.171 10:47:35 -- common/autotest_common.sh@10 -- # set +x 00:04:42.171 ************************************ 00:04:42.171 START TEST dpdk_mem_utility 00:04:42.171 ************************************ 00:04:42.171 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:42.171 * Looking for test storage... 
00:04:42.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:42.171 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.171 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.171 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.171 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.171 10:47:35 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.171 10:47:35 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.171 10:47:35 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.171 10:47:35 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.171 10:47:35 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.171 10:47:35 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.171 10:47:35 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.171 10:47:35 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.171 10:47:35 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.172 10:47:35 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:42.172 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.172 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.172 --rc genhtml_branch_coverage=1 00:04:42.172 --rc genhtml_function_coverage=1 00:04:42.172 --rc genhtml_legend=1 00:04:42.172 --rc geninfo_all_blocks=1 00:04:42.172 --rc geninfo_unexecuted_blocks=1 00:04:42.172 00:04:42.172 ' 00:04:42.172 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.172 --rc 
genhtml_branch_coverage=1 00:04:42.172 --rc genhtml_function_coverage=1 00:04:42.172 --rc genhtml_legend=1 00:04:42.172 --rc geninfo_all_blocks=1 00:04:42.172 --rc geninfo_unexecuted_blocks=1 00:04:42.172 00:04:42.172 ' 00:04:42.172 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.172 --rc genhtml_branch_coverage=1 00:04:42.172 --rc genhtml_function_coverage=1 00:04:42.172 --rc genhtml_legend=1 00:04:42.172 --rc geninfo_all_blocks=1 00:04:42.172 --rc geninfo_unexecuted_blocks=1 00:04:42.172 00:04:42.172 ' 00:04:42.172 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.172 --rc genhtml_branch_coverage=1 00:04:42.172 --rc genhtml_function_coverage=1 00:04:42.172 --rc genhtml_legend=1 00:04:42.172 --rc geninfo_all_blocks=1 00:04:42.172 --rc geninfo_unexecuted_blocks=1 00:04:42.172 00:04:42.172 ' 00:04:42.172 10:47:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:42.172 10:47:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58138 00:04:42.172 10:47:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.172 10:47:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58138 00:04:42.172 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58138 ']' 00:04:42.172 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.172 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.172 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.172 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.172 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.431 [2024-12-09 10:47:35.401071] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:04:42.431 [2024-12-09 10:47:35.401254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58138 ] 00:04:42.431 [2024-12-09 10:47:35.537922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.431 [2024-12-09 10:47:35.595838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.690 [2024-12-09 10:47:35.660047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:42.690 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.690 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:42.690 10:47:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:42.690 10:47:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:42.690 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.690 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.950 { 00:04:42.950 "filename": "/tmp/spdk_mem_dump.txt" 00:04:42.950 } 00:04:42.950 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.950 10:47:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:42.950 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:42.950 1 heaps totaling size 818.000000 MiB 00:04:42.950 size: 818.000000 MiB heap id: 0 00:04:42.950 end heaps---------- 00:04:42.950 9 mempools totaling size 603.782043 MiB 00:04:42.950 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:42.950 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:42.950 size: 100.555481 MiB name: bdev_io_58138 00:04:42.950 size: 50.003479 MiB name: msgpool_58138 00:04:42.950 size: 36.509338 MiB name: fsdev_io_58138 00:04:42.950 size: 21.763794 MiB name: PDU_Pool 00:04:42.950 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:42.950 size: 4.133484 MiB name: evtpool_58138 00:04:42.950 size: 0.026123 MiB name: Session_Pool 00:04:42.950 end mempools------- 00:04:42.950 6 memzones totaling size 4.142822 MiB 00:04:42.950 size: 1.000366 MiB name: RG_ring_0_58138 00:04:42.950 size: 1.000366 MiB name: RG_ring_1_58138 00:04:42.950 size: 1.000366 MiB name: RG_ring_4_58138 00:04:42.950 size: 1.000366 MiB name: RG_ring_5_58138 00:04:42.950 size: 0.125366 MiB name: RG_ring_2_58138 00:04:42.950 size: 0.015991 MiB name: RG_ring_3_58138 00:04:42.950 end memzones------- 00:04:42.950 10:47:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:42.950 heap id: 0 total size: 818.000000 MiB number of busy elements: 319 number of free elements: 15 00:04:42.950 list of free elements. 
size: 10.802124 MiB 00:04:42.950 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:42.950 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:42.950 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:42.950 element at address: 0x200000400000 with size: 0.993958 MiB 00:04:42.950 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:42.950 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:42.950 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:42.950 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:42.950 element at address: 0x20001ae00000 with size: 0.567322 MiB 00:04:42.950 element at address: 0x20000a600000 with size: 0.488892 MiB 00:04:42.950 element at address: 0x200000c00000 with size: 0.486267 MiB 00:04:42.950 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:42.950 element at address: 0x200003e00000 with size: 0.480286 MiB 00:04:42.950 element at address: 0x200028200000 with size: 0.395752 MiB 00:04:42.950 element at address: 0x200000800000 with size: 0.351746 MiB 00:04:42.950 list of standard malloc elements. size: 199.268982 MiB 00:04:42.950 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:42.950 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:42.950 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:42.950 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:42.950 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:42.950 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:42.950 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:42.950 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:42.950 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:42.950 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:04:42.950 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000085e580 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087e840 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087e900 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087f080 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087f140 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087f200 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087f380 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087f440 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087f500 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:42.950 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:42.950 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:42.950 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:04:42.950 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:04:42.950 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:04:42.950 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:04:42.951 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:42.951 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:04:42.951 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:42.951 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:42.951 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:42.951 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae913c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae91480 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92b00 with size: 0.000183 MiB 
00:04:42.951 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:04:42.951 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:04:42.952 element at 
address: 0x20001ae95080 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:42.952 element at address: 0x200028265500 with size: 0.000183 MiB 00:04:42.952 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826c480 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826c540 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826c600 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826c780 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826c840 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826c900 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826d080 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826d140 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826d200 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826d380 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826d440 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826d500 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826d680 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826d740 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826d800 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826d980 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826da40 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826db00 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826de00 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826df80 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826e040 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826e100 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826e1c0 
with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826e280 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826e340 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826e400 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826e580 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826e640 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826e700 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826e880 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826e940 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826f000 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826f180 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826f240 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826f300 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826f480 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826f540 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826f600 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826f780 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826f840 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826f900 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:42.952 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:42.952 list of memzone associated elements. 
size: 607.928894 MiB 00:04:42.952 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:42.952 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:42.952 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:42.952 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:42.952 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:42.952 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58138_0 00:04:42.952 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:42.952 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58138_0 00:04:42.952 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:42.952 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58138_0 00:04:42.952 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:42.952 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:42.952 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:42.952 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:42.952 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:42.952 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58138_0 00:04:42.952 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:42.952 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58138 00:04:42.952 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:42.952 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58138 00:04:42.952 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:42.952 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:42.952 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:42.952 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:42.952 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:42.952 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:42.952 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:42.952 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:42.952 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:42.952 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58138 00:04:42.952 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:42.952 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58138 00:04:42.952 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:42.952 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58138 00:04:42.952 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:42.952 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58138 00:04:42.952 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:42.952 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58138 00:04:42.952 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:42.952 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58138 00:04:42.952 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:42.952 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:42.952 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:42.952 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:42.952 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:42.952 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:42.952 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:42.952 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58138 00:04:42.952 element at address: 0x20000085e640 with size: 0.125488 MiB 00:04:42.952 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58138 00:04:42.952 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:42.952 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:42.952 element at address: 0x200028265680 with size: 0.023743 MiB 00:04:42.952 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:42.952 element at address: 0x20000085a380 with size: 0.016113 MiB 00:04:42.952 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58138 00:04:42.952 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:04:42.952 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:42.952 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:04:42.953 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58138 00:04:42.953 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:42.953 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58138 00:04:42.953 element at address: 0x20000085a180 with size: 0.000305 MiB 00:04:42.953 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58138 00:04:42.953 element at address: 0x20002826c280 with size: 0.000305 MiB 00:04:42.953 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:42.953 10:47:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:42.953 10:47:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58138 00:04:42.953 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58138 ']' 00:04:42.953 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58138 00:04:42.953 10:47:35 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:42.953 10:47:36 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.953 10:47:36 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58138 00:04:42.953 killing process with pid 58138 00:04:42.953 10:47:36 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.953 10:47:36 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.953 10:47:36 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58138' 00:04:42.953 10:47:36 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58138 00:04:42.953 10:47:36 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58138 00:04:43.519 ************************************ 00:04:43.519 END TEST dpdk_mem_utility 00:04:43.519 ************************************ 00:04:43.519 00:04:43.519 real 0m1.294s 00:04:43.519 user 0m1.202s 00:04:43.519 sys 0m0.431s 00:04:43.519 10:47:36 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.519 10:47:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.519 10:47:36 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:43.519 10:47:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.519 10:47:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.519 10:47:36 -- common/autotest_common.sh@10 -- # set +x 
00:04:43.519 ************************************ 00:04:43.519 START TEST event 00:04:43.519 ************************************ 00:04:43.519 10:47:36 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:43.519 * Looking for test storage... 00:04:43.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:43.519 10:47:36 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:43.519 10:47:36 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:43.519 10:47:36 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:43.519 10:47:36 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:43.519 10:47:36 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.519 10:47:36 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.519 10:47:36 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.519 10:47:36 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.519 10:47:36 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.519 10:47:36 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.519 10:47:36 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.519 10:47:36 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.519 10:47:36 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.519 10:47:36 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.519 10:47:36 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.519 10:47:36 event -- scripts/common.sh@344 -- # case "$op" in 00:04:43.519 10:47:36 event -- scripts/common.sh@345 -- # : 1 00:04:43.519 10:47:36 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.519 10:47:36 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.519 10:47:36 event -- scripts/common.sh@365 -- # decimal 1 00:04:43.519 10:47:36 event -- scripts/common.sh@353 -- # local d=1 00:04:43.519 10:47:36 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.519 10:47:36 event -- scripts/common.sh@355 -- # echo 1 00:04:43.519 10:47:36 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.519 10:47:36 event -- scripts/common.sh@366 -- # decimal 2 00:04:43.519 10:47:36 event -- scripts/common.sh@353 -- # local d=2 00:04:43.519 10:47:36 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.519 10:47:36 event -- scripts/common.sh@355 -- # echo 2 00:04:43.519 10:47:36 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.519 10:47:36 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.519 10:47:36 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.519 10:47:36 event -- scripts/common.sh@368 -- # return 0 00:04:43.519 10:47:36 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.519 10:47:36 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:43.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.519 --rc genhtml_branch_coverage=1 00:04:43.519 --rc genhtml_function_coverage=1 00:04:43.519 --rc genhtml_legend=1 00:04:43.519 --rc geninfo_all_blocks=1 00:04:43.519 --rc geninfo_unexecuted_blocks=1 00:04:43.519 00:04:43.519 ' 00:04:43.519 10:47:36 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:43.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.519 --rc genhtml_branch_coverage=1 00:04:43.519 --rc genhtml_function_coverage=1 00:04:43.519 --rc genhtml_legend=1 00:04:43.519 --rc 
geninfo_all_blocks=1 00:04:43.519 --rc geninfo_unexecuted_blocks=1 00:04:43.519 00:04:43.519 ' 00:04:43.519 10:47:36 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:43.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.519 --rc genhtml_branch_coverage=1 00:04:43.519 --rc genhtml_function_coverage=1 00:04:43.519 --rc genhtml_legend=1 00:04:43.519 --rc geninfo_all_blocks=1 00:04:43.519 --rc geninfo_unexecuted_blocks=1 00:04:43.519 00:04:43.519 ' 00:04:43.519 10:47:36 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:43.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.519 --rc genhtml_branch_coverage=1 00:04:43.519 --rc genhtml_function_coverage=1 00:04:43.519 --rc genhtml_legend=1 00:04:43.519 --rc geninfo_all_blocks=1 00:04:43.519 --rc geninfo_unexecuted_blocks=1 00:04:43.520 00:04:43.520 ' 00:04:43.520 10:47:36 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:43.520 10:47:36 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:43.520 10:47:36 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.520 10:47:36 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:43.520 10:47:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.520 10:47:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.520 ************************************ 00:04:43.520 START TEST event_perf 00:04:43.520 ************************************ 00:04:43.520 10:47:36 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:43.777 Running I/O for 1 seconds...[2024-12-09 10:47:36.721499] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:04:43.777 [2024-12-09 10:47:36.721640] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58215 ] 00:04:43.777 [2024-12-09 10:47:36.877531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:43.777 [2024-12-09 10:47:36.940113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.777 [2024-12-09 10:47:36.940289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.777 [2024-12-09 10:47:36.940233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.777 Running I/O for 1 seconds...[2024-12-09 10:47:36.940291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.252 00:04:45.253 lcore 0: 177316 00:04:45.253 lcore 1: 177316 00:04:45.253 lcore 2: 177316 00:04:45.253 lcore 3: 177316 00:04:45.253 done. 
00:04:45.253 00:04:45.253 real 0m1.345s 00:04:45.253 user 0m4.173s 00:04:45.253 sys 0m0.047s 00:04:45.253 10:47:38 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.253 10:47:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:45.253 ************************************ 00:04:45.253 END TEST event_perf 00:04:45.253 ************************************ 00:04:45.253 10:47:38 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:45.253 10:47:38 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:45.253 10:47:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.253 10:47:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.253 ************************************ 00:04:45.253 START TEST event_reactor 00:04:45.253 ************************************ 00:04:45.253 10:47:38 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:45.253 [2024-12-09 10:47:38.127138] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:04:45.253 [2024-12-09 10:47:38.127337] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58248 ] 00:04:45.253 [2024-12-09 10:47:38.267216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.253 [2024-12-09 10:47:38.337941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.630 test_start 00:04:46.630 oneshot 00:04:46.630 tick 100 00:04:46.630 tick 100 00:04:46.630 tick 250 00:04:46.630 tick 100 00:04:46.630 tick 100 00:04:46.630 tick 100 00:04:46.630 tick 250 00:04:46.630 tick 500 00:04:46.630 tick 100 00:04:46.630 tick 100 00:04:46.630 tick 250 00:04:46.630 tick 100 00:04:46.630 tick 100 00:04:46.630 test_end 00:04:46.630 ************************************ 00:04:46.630 END TEST event_reactor 00:04:46.630 ************************************ 00:04:46.630 00:04:46.630 real 0m1.327s 00:04:46.630 user 0m1.178s 00:04:46.630 sys 0m0.041s 00:04:46.630 10:47:39 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.630 10:47:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:46.630 10:47:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.630 10:47:39 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:46.630 10:47:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.630 10:47:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.630 ************************************ 00:04:46.630 START TEST event_reactor_perf 00:04:46.630 ************************************ 00:04:46.630 10:47:39 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:46.630 [2024-12-09 10:47:39.496652] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:04:46.630 [2024-12-09 10:47:39.496800] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58284 ] 00:04:46.630 [2024-12-09 10:47:39.654019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.630 [2024-12-09 10:47:39.711963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.005 test_start 00:04:48.005 test_end 00:04:48.005 Performance: 390559 events per second 00:04:48.005 ************************************ 00:04:48.005 END TEST event_reactor_perf 00:04:48.005 ************************************ 00:04:48.005 00:04:48.005 real 0m1.327s 00:04:48.005 user 0m1.179s 00:04:48.005 sys 0m0.041s 00:04:48.005 10:47:40 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.005 10:47:40 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:48.005 10:47:40 event -- event/event.sh@49 -- # uname -s 00:04:48.005 10:47:40 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:48.005 10:47:40 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:48.005 10:47:40 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.005 10:47:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.005 10:47:40 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.005 ************************************ 00:04:48.005 START TEST event_scheduler 00:04:48.005 ************************************ 00:04:48.005 10:47:40 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:48.005 * Looking for test storage... 
00:04:48.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:48.005 10:47:40 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:48.005 10:47:40 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:48.005 10:47:40 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:48.005 10:47:41 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:48.005 10:47:41 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.005 10:47:41 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.005 10:47:41 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.005 10:47:41 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.005 10:47:41 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.005 10:47:41 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.005 10:47:41 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.005 10:47:41 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.006 10:47:41 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:48.006 10:47:41 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.006 10:47:41 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.006 --rc genhtml_branch_coverage=1 00:04:48.006 --rc genhtml_function_coverage=1 00:04:48.006 --rc genhtml_legend=1 00:04:48.006 --rc geninfo_all_blocks=1 00:04:48.006 --rc geninfo_unexecuted_blocks=1 00:04:48.006 00:04:48.006 ' 00:04:48.006 10:47:41 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.006 --rc genhtml_branch_coverage=1 00:04:48.006 --rc genhtml_function_coverage=1 00:04:48.006 --rc genhtml_legend=1 00:04:48.006 --rc geninfo_all_blocks=1 00:04:48.006 --rc geninfo_unexecuted_blocks=1 00:04:48.006 00:04:48.006 ' 00:04:48.006 10:47:41 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.006 --rc genhtml_branch_coverage=1 00:04:48.006 --rc genhtml_function_coverage=1 00:04:48.006 --rc genhtml_legend=1 00:04:48.006 --rc geninfo_all_blocks=1 00:04:48.006 --rc geninfo_unexecuted_blocks=1 00:04:48.006 00:04:48.006 ' 00:04:48.006 10:47:41 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.006 --rc genhtml_branch_coverage=1 00:04:48.006 --rc genhtml_function_coverage=1 00:04:48.006 --rc genhtml_legend=1 00:04:48.006 --rc geninfo_all_blocks=1 00:04:48.006 --rc geninfo_unexecuted_blocks=1 00:04:48.006 00:04:48.006 ' 00:04:48.006 10:47:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:48.006 10:47:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58353 00:04:48.006 10:47:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:48.006 10:47:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.006 10:47:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58353 00:04:48.006 10:47:41 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58353 ']' 00:04:48.006 10:47:41 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.006 10:47:41 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.006 10:47:41 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.006 10:47:41 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.006 10:47:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.006 [2024-12-09 10:47:41.168473] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:04:48.006 [2024-12-09 10:47:41.169162] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58353 ] 00:04:48.263 [2024-12-09 10:47:41.309463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.263 [2024-12-09 10:47:41.379673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.263 [2024-12-09 10:47:41.379877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.263 [2024-12-09 10:47:41.379780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.263 [2024-12-09 10:47:41.379882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.275 10:47:42 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.275 10:47:42 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:49.275 10:47:42 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:49.275 10:47:42 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.275 10:47:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.275 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:49.275 POWER: Cannot set governor of lcore 0 to userspace 00:04:49.275 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:49.275 POWER: Cannot set governor of lcore 0 to performance 00:04:49.275 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:49.275 POWER: Cannot set governor of lcore 0 to userspace 00:04:49.275 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:49.275 POWER: Cannot set governor of lcore 0 to userspace 00:04:49.275 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:49.275 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:49.275 POWER: Unable to set Power Management Environment for lcore 0 00:04:49.275 [2024-12-09 10:47:42.127722] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:49.275 [2024-12-09 10:47:42.127733] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:49.275 [2024-12-09 10:47:42.127766] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:49.275 [2024-12-09 10:47:42.127782] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:49.275 [2024-12-09 10:47:42.127788] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:49.275 [2024-12-09 10:47:42.127792] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:49.275 10:47:42 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.275 10:47:42 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:49.275 10:47:42 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.275 10:47:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.275 [2024-12-09 10:47:42.178783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:49.275 [2024-12-09 10:47:42.211690] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:49.275 10:47:42 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.275 10:47:42 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:49.275 10:47:42 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.275 10:47:42 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.275 10:47:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.275 ************************************ 00:04:49.275 START TEST scheduler_create_thread 00:04:49.275 ************************************ 00:04:49.275 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:49.275 10:47:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:49.275 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.275 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.275 2 00:04:49.275 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.275 10:47:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:49.275 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.275 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.275 3 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.276 4 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.276 5 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.276 6 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.276 7 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.276 8 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.276 9 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.276 10 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.276 10:47:42 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.276 10:47:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.654 10:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.654 10:47:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:50.654 10:47:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:50.654 10:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.654 10:47:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.026 ************************************ 00:04:52.026 END TEST scheduler_create_thread 00:04:52.026 ************************************ 00:04:52.026 10:47:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.026 00:04:52.026 real 0m2.618s 00:04:52.026 user 0m0.013s 00:04:52.026 sys 0m0.008s 00:04:52.026 10:47:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.026 10:47:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.027 10:47:44 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:52.027 10:47:44 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58353 00:04:52.027 10:47:44 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58353 ']' 00:04:52.027 10:47:44 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58353 00:04:52.027 10:47:44 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:52.027 10:47:44 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.027 10:47:44 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58353 00:04:52.027 killing process with pid 58353 00:04:52.027 10:47:44 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:52.027 10:47:44 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:52.027 10:47:44 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58353' 00:04:52.027 10:47:44 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58353 00:04:52.027 10:47:44 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58353 00:04:52.284 [2024-12-09 10:47:45.318152] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:52.543 00:04:52.543 real 0m4.678s 00:04:52.543 user 0m8.721s 00:04:52.543 sys 0m0.387s 00:04:52.543 10:47:45 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.543 10:47:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.543 ************************************ 00:04:52.543 END TEST event_scheduler 00:04:52.543 ************************************ 00:04:52.543 10:47:45 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:52.543 10:47:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:52.543 10:47:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.543 10:47:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.543 10:47:45 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.543 ************************************ 00:04:52.543 START TEST app_repeat 00:04:52.543 ************************************ 00:04:52.543 10:47:45 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:52.543 10:47:45 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.543 10:47:45 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.543 10:47:45 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:52.543 10:47:45 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.543 10:47:45 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:52.543 10:47:45 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:52.543 10:47:45 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:52.543 Process app_repeat pid: 58453 00:04:52.543 spdk_app_start Round 0 00:04:52.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:52.543 10:47:45 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58453 00:04:52.543 10:47:45 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:52.543 10:47:45 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.543 10:47:45 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58453' 00:04:52.543 10:47:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:52.543 10:47:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:52.543 10:47:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58453 /var/tmp/spdk-nbd.sock 00:04:52.543 10:47:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58453 ']' 00:04:52.543 10:47:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.543 10:47:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.543 10:47:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
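The event_scheduler run traced above drives the dynamic scheduler entirely over JSON-RPC: the test app starts with --wait-for-rpc, the scheduler is selected, subsystem init is completed, and the pinned busy/idle threads are created through the scheduler_plugin. A condensed standalone sketch of that flow, reconstructed from the xtrace above (the rpc wrapper function and the loop are illustrative, not the harness's exact code; rpc_cmd in the harness forwards to scripts/rpc.py, and the plugin module is assumed to be importable the way the harness arranges):

  # Sketch of the scheduler test's RPC flow, condensed from the trace above.
  # Assumes the test/event/scheduler app is already running with --wait-for-rpc
  # and listening on /var/tmp/spdk.sock.
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin "$@"; }

  rpc framework_set_scheduler dynamic        # select the dynamic scheduler (scheduler.sh@39)
  rpc framework_start_init                   # finish subsystem init (scheduler.sh@40)

  # One 100%-busy and one idle thread pinned to each core of the 0xF mask
  for mask in 0x1 0x2 0x4 0x8; do
    rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
    rpc scheduler_thread_create -n idle_pinned  -m "$mask" -a 0
  done

  rpc scheduler_thread_create -n one_third_active -a 30
  tid=$(rpc scheduler_thread_create -n half_active -a 0)   # create prints the new thread id (11 in the trace)
  rpc scheduler_thread_set_active "$tid" 50                 # raise it to 50% busy
  tid=$(rpc scheduler_thread_create -n deleted -a 100)      # thread id 12 in the trace
  rpc scheduler_thread_delete "$tid"                        # exercise thread removal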
00:04:52.543 10:47:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.543 10:47:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.543 [2024-12-09 10:47:45.642948] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:04:52.543 [2024-12-09 10:47:45.643058] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58453 ] 00:04:52.801 [2024-12-09 10:47:45.803115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.801 [2024-12-09 10:47:45.866014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.801 [2024-12-09 10:47:45.866023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.801 [2024-12-09 10:47:45.919725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:53.059 10:47:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.059 10:47:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:53.059 10:47:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.377 Malloc0 00:04:53.377 10:47:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.377 Malloc1 00:04:53.377 10:47:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.377 10:47:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.377 10:47:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.377 10:47:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:53.377 10:47:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.377 10:47:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.377 10:47:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.377 10:47:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.377 10:47:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.377 10:47:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.377 10:47:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.377 10:47:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.377 10:47:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:53.377 10:47:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.377 10:47:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.377 10:47:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.963 /dev/nbd0 00:04:53.963 10:47:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:53.963 10:47:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:53.963 10:47:46 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:04:53.963 10:47:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:53.963 10:47:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:53.963 10:47:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:53.963 10:47:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:53.963 10:47:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:53.963 10:47:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:53.963 10:47:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:53.963 10:47:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.963 1+0 records in 00:04:53.963 1+0 records out 00:04:53.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367673 s, 11.1 MB/s 00:04:53.963 10:47:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.963 10:47:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:53.963 10:47:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.963 10:47:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:53.963 10:47:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:53.963 10:47:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.963 10:47:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.963 10:47:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.221 /dev/nbd1 00:04:54.221 10:47:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.221 10:47:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.221 10:47:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:54.221 10:47:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:54.221 10:47:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:54.221 10:47:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:54.221 10:47:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:54.221 10:47:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:54.221 10:47:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:54.221 10:47:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:54.221 10:47:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.221 1+0 records in 00:04:54.221 1+0 records out 00:04:54.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390427 s, 10.5 MB/s 00:04:54.221 10:47:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.221 10:47:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:54.221 10:47:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:54.221 10:47:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:54.221 10:47:47 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:04:54.221 10:47:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.221 10:47:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.221 10:47:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.221 10:47:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.221 10:47:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:54.480 { 00:04:54.480 "nbd_device": "/dev/nbd0", 00:04:54.480 "bdev_name": "Malloc0" 00:04:54.480 }, 00:04:54.480 { 00:04:54.480 "nbd_device": "/dev/nbd1", 00:04:54.480 "bdev_name": "Malloc1" 00:04:54.480 } 00:04:54.480 ]' 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.480 { 00:04:54.480 "nbd_device": "/dev/nbd0", 00:04:54.480 "bdev_name": "Malloc0" 00:04:54.480 }, 00:04:54.480 { 00:04:54.480 "nbd_device": "/dev/nbd1", 00:04:54.480 "bdev_name": "Malloc1" 00:04:54.480 } 00:04:54.480 ]' 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.480 /dev/nbd1' 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.480 /dev/nbd1' 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.480 256+0 records in 00:04:54.480 256+0 records out 00:04:54.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153467 s, 68.3 MB/s 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.480 256+0 records in 00:04:54.480 256+0 records out 00:04:54.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241799 s, 43.4 MB/s 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.480 256+0 records in 00:04:54.480 
256+0 records out 00:04:54.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250104 s, 41.9 MB/s 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.480 10:47:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:54.481 10:47:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.481 10:47:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:55.046 10:47:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:55.046 10:47:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:55.046 10:47:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:55.046 10:47:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.046 10:47:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.046 10:47:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.046 10:47:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.046 10:47:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.046 10:47:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.046 10:47:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.303 10:47:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.303 10:47:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.303 10:47:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.303 10:47:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.303 10:47:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
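Each app_repeat round above runs the same attach / write / verify / detach cycle against the two malloc bdevs. Stripped of the nbd_common.sh bookkeeping, the core of that pass can be reconstructed from the commands in the trace (the file location and the 1 MiB size come from the trace; the loop form is a sketch, not the harness's exact code):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

  "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0        # export each bdev as an NBD block device
  "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

  dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random reference data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # write it through each block device
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"                               # read back and compare; non-zero exit fails the round
  done
  rm "$tmp"

  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0                  # detach both devices again
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1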
00:04:55.303 10:47:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.303 10:47:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.303 10:47:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.303 10:47:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.303 10:47:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.303 10:47:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.561 10:47:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.561 10:47:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.561 10:47:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.561 10:47:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.561 10:47:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.561 10:47:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.561 10:47:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.561 10:47:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.561 10:47:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.561 10:47:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.561 10:47:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.561 10:47:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.561 10:47:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:55.819 10:47:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:56.077 [2024-12-09 10:47:49.078845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.077 [2024-12-09 10:47:49.138074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.077 [2024-12-09 10:47:49.138075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.077 [2024-12-09 10:47:49.182395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:56.077 [2024-12-09 10:47:49.182483] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:56.077 [2024-12-09 10:47:49.182492] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:59.359 spdk_app_start Round 1 00:04:59.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:59.359 10:47:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:59.359 10:47:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:59.359 10:47:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58453 /var/tmp/spdk-nbd.sock 00:04:59.359 10:47:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58453 ']' 00:04:59.359 10:47:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.359 10:47:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.359 10:47:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
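After the devices are detached, the harness counts how many /dev/nbd* entries the target still reports; the nbd_get_count steps that produced the count=0 result above boil down to the following sketch (the || true mirrors the trace's bare "true" step, which absorbs grep's non-zero exit when nothing matches; the error message is illustrative):

  # Count the NBD devices the target currently exports (pattern from nbd_get_count above).
  disks=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device')
  count=$(echo "$disks" | grep -c /dev/nbd || true)          # grep -c exits 1 when the count is 0
  [ "$count" -eq 0 ] || { echo "NBD devices still attached: $disks"; exit 1; }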
00:04:59.359 10:47:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.359 10:47:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.359 10:47:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.359 10:47:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:59.360 10:47:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.360 Malloc0 00:04:59.360 10:47:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.617 Malloc1 00:04:59.617 10:47:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.617 10:47:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.617 10:47:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.617 10:47:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.617 10:47:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.617 10:47:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.617 10:47:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.617 10:47:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.617 10:47:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.617 10:47:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.617 10:47:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.617 10:47:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.617 10:47:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:59.617 10:47:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.617 10:47:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.617 10:47:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.876 /dev/nbd0 00:04:59.876 10:47:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.876 10:47:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.876 10:47:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:59.876 10:47:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:59.876 10:47:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:59.876 10:47:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:59.876 10:47:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:59.876 10:47:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:59.876 10:47:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:59.876 10:47:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:59.876 10:47:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.876 1+0 records in 00:04:59.876 1+0 records out 
00:04:59.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216985 s, 18.9 MB/s 00:04:59.876 10:47:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.876 10:47:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:59.876 10:47:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.876 10:47:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:59.876 10:47:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:59.876 10:47:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.876 10:47:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.876 10:47:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:00.134 /dev/nbd1 00:05:00.134 10:47:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:00.134 10:47:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:00.134 10:47:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:00.134 10:47:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:00.134 10:47:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:00.134 10:47:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:00.134 10:47:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:00.134 10:47:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:00.134 10:47:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:00.134 10:47:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:00.134 10:47:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:00.134 1+0 records in 00:05:00.134 1+0 records out 00:05:00.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280954 s, 14.6 MB/s 00:05:00.134 10:47:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.134 10:47:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:00.134 10:47:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:00.134 10:47:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:00.134 10:47:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:00.134 10:47:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:00.134 10:47:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:00.134 10:47:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.134 10:47:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.134 10:47:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:00.700 { 00:05:00.700 "nbd_device": "/dev/nbd0", 00:05:00.700 "bdev_name": "Malloc0" 00:05:00.700 }, 00:05:00.700 { 00:05:00.700 "nbd_device": "/dev/nbd1", 00:05:00.700 "bdev_name": "Malloc1" 00:05:00.700 } 
00:05:00.700 ]' 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:00.700 { 00:05:00.700 "nbd_device": "/dev/nbd0", 00:05:00.700 "bdev_name": "Malloc0" 00:05:00.700 }, 00:05:00.700 { 00:05:00.700 "nbd_device": "/dev/nbd1", 00:05:00.700 "bdev_name": "Malloc1" 00:05:00.700 } 00:05:00.700 ]' 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:00.700 /dev/nbd1' 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:00.700 /dev/nbd1' 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:00.700 256+0 records in 00:05:00.700 256+0 records out 00:05:00.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122215 s, 85.8 MB/s 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:00.700 256+0 records in 00:05:00.700 256+0 records out 00:05:00.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215436 s, 48.7 MB/s 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:00.700 256+0 records in 00:05:00.700 256+0 records out 00:05:00.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226733 s, 46.2 MB/s 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:00.700 10:47:53 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.700 10:47:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:00.958 10:47:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:00.958 10:47:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:00.958 10:47:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:00.958 10:47:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.958 10:47:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.958 10:47:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:00.958 10:47:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.958 10:47:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.958 10:47:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.958 10:47:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:01.215 10:47:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:01.215 10:47:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:01.215 10:47:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:01.215 10:47:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:01.216 10:47:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:01.216 10:47:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:01.216 10:47:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:01.216 10:47:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:01.216 10:47:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.216 10:47:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.216 10:47:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.473 10:47:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.473 10:47:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:01.473 10:47:54 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:01.473 10:47:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.473 10:47:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.473 10:47:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.473 10:47:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:01.732 10:47:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.732 10:47:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.732 10:47:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.732 10:47:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.732 10:47:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.732 10:47:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.993 10:47:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:01.993 [2024-12-09 10:47:55.052104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.993 [2024-12-09 10:47:55.109800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.993 [2024-12-09 10:47:55.109801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.993 [2024-12-09 10:47:55.154236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:01.993 [2024-12-09 10:47:55.154309] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.993 [2024-12-09 10:47:55.154317] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:05.286 spdk_app_start Round 2 00:05:05.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.286 10:47:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.286 10:47:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:05.286 10:47:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58453 /var/tmp/spdk-nbd.sock 00:05:05.286 10:47:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58453 ']' 00:05:05.286 10:47:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.286 10:47:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.286 10:47:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
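The waitfornbd / waitfornbd_exit steps interleaved through the rounds above poll /proc/partitions until a device appears (and is readable) or disappears, with the retry counter bounded at 20. The two helpers can be reconstructed from the trace roughly as below; the 0.1 s back-off and the /tmp scratch path are assumptions, since every probe in the trace succeeds on the first attempt and the harness uses its own scratch file:

  # Reconstructed polling helpers; back-off interval and scratch path are illustrative.
  waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && break       # device visible to the kernel
      sleep 0.1
    done
    # sanity-read one block through the device, as the trace does right after the break
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    local size
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]                                          # non-empty read means the device is ready
  }

  waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions || break        # gone from /proc/partitions: detach finished
      sleep 0.1
    done
    return 0
  }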
00:05:05.286 10:47:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.286 10:47:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.286 10:47:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.286 10:47:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:05.286 10:47:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.286 Malloc0 00:05:05.287 10:47:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.546 Malloc1 00:05:05.546 10:47:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.546 10:47:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.546 10:47:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.546 10:47:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.546 10:47:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.546 10:47:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.546 10:47:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.546 10:47:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.546 10:47:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.546 10:47:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.546 10:47:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.546 10:47:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.546 10:47:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.546 10:47:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.546 10:47:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.546 10:47:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.804 /dev/nbd0 00:05:05.804 10:47:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.804 10:47:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.804 10:47:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:05.804 10:47:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:05.804 10:47:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:05.804 10:47:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:05.804 10:47:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:05.804 10:47:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:05.804 10:47:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:05.804 10:47:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:05.804 10:47:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.804 1+0 records in 00:05:05.804 1+0 records out 
00:05:05.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318999 s, 12.8 MB/s 00:05:05.805 10:47:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.805 10:47:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:05.805 10:47:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.805 10:47:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:05.805 10:47:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:05.805 10:47:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.805 10:47:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.805 10:47:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.063 /dev/nbd1 00:05:06.063 10:47:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.063 10:47:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.063 10:47:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:06.063 10:47:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:06.063 10:47:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:06.063 10:47:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:06.063 10:47:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:06.063 10:47:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:06.063 10:47:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:06.063 10:47:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:06.063 10:47:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.063 1+0 records in 00:05:06.063 1+0 records out 00:05:06.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371177 s, 11.0 MB/s 00:05:06.063 10:47:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.063 10:47:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:06.063 10:47:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.063 10:47:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:06.063 10:47:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:06.063 10:47:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.063 10:47:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.063 10:47:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.063 10:47:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.063 10:47:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.321 10:47:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.321 { 00:05:06.321 "nbd_device": "/dev/nbd0", 00:05:06.321 "bdev_name": "Malloc0" 00:05:06.321 }, 00:05:06.321 { 00:05:06.321 "nbd_device": "/dev/nbd1", 00:05:06.321 "bdev_name": "Malloc1" 00:05:06.321 } 
00:05:06.321 ]' 00:05:06.321 10:47:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.321 { 00:05:06.321 "nbd_device": "/dev/nbd0", 00:05:06.321 "bdev_name": "Malloc0" 00:05:06.321 }, 00:05:06.321 { 00:05:06.321 "nbd_device": "/dev/nbd1", 00:05:06.321 "bdev_name": "Malloc1" 00:05:06.321 } 00:05:06.321 ]' 00:05:06.321 10:47:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.322 10:47:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.322 /dev/nbd1' 00:05:06.322 10:47:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.322 /dev/nbd1' 00:05:06.322 10:47:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.580 10:47:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.580 10:47:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.581 256+0 records in 00:05:06.581 256+0 records out 00:05:06.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139556 s, 75.1 MB/s 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.581 256+0 records in 00:05:06.581 256+0 records out 00:05:06.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021108 s, 49.7 MB/s 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.581 256+0 records in 00:05:06.581 256+0 records out 00:05:06.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213661 s, 49.1 MB/s 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.581 10:47:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.839 10:47:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.839 10:47:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.839 10:47:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.839 10:47:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.840 10:47:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.840 10:47:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.840 10:47:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.840 10:47:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.840 10:47:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.840 10:47:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.102 10:48:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.102 10:48:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.102 10:48:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.102 10:48:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.102 10:48:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.102 10:48:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.102 10:48:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.102 10:48:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.102 10:48:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.102 10:48:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.102 10:48:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.364 10:48:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.364 10:48:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.364 10:48:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:07.364 10:48:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.364 10:48:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.364 10:48:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.364 10:48:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.364 10:48:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.364 10:48:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.364 10:48:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.364 10:48:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.364 10:48:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.364 10:48:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.624 10:48:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.624 [2024-12-09 10:48:00.784009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.882 [2024-12-09 10:48:00.840479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.882 [2024-12-09 10:48:00.840481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.882 [2024-12-09 10:48:00.883798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:07.882 [2024-12-09 10:48:00.883872] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.882 [2024-12-09 10:48:00.883881] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:11.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.170 10:48:03 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58453 /var/tmp/spdk-nbd.sock 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58453 ']' 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
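For orientation, the nbd write/verify/teardown pass traced above reduces to the following sketch. Paths, device names and commands are copied from the trace; the loop structure and the 0.1 s poll interval are assumptions, not the literal helper code from bdev/nbd_common.sh.
  tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256            # seed a 1 MiB pattern file
  for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct    # write the pattern to each exported device
  done
  for dev in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$dev"                               # each device must read back identical data
  done
  rm "$tmp"
  for dev in /dev/nbd0 /dev/nbd1; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
    while grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.1; done   # wait for detach
  done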
00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:11.170 10:48:03 event.app_repeat -- event/event.sh@39 -- # killprocess 58453 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58453 ']' 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58453 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58453 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58453' 00:05:11.170 killing process with pid 58453 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58453 00:05:11.170 10:48:03 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58453 00:05:11.170 spdk_app_start is called in Round 0. 00:05:11.170 Shutdown signal received, stop current app iteration 00:05:11.170 Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 reinitialization... 00:05:11.170 spdk_app_start is called in Round 1. 00:05:11.170 Shutdown signal received, stop current app iteration 00:05:11.170 Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 reinitialization... 00:05:11.170 spdk_app_start is called in Round 2. 00:05:11.170 Shutdown signal received, stop current app iteration 00:05:11.170 Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 reinitialization... 00:05:11.170 spdk_app_start is called in Round 3. 00:05:11.170 Shutdown signal received, stop current app iteration 00:05:11.170 10:48:04 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:11.170 10:48:04 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:11.170 00:05:11.170 real 0m18.485s 00:05:11.170 user 0m41.677s 00:05:11.170 sys 0m3.028s 00:05:11.170 10:48:04 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.170 10:48:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.170 ************************************ 00:05:11.170 END TEST app_repeat 00:05:11.170 ************************************ 00:05:11.170 10:48:04 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:11.170 10:48:04 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:11.170 10:48:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.170 10:48:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.170 10:48:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.170 ************************************ 00:05:11.170 START TEST cpu_locks 00:05:11.170 ************************************ 00:05:11.170 10:48:04 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:11.170 * Looking for test storage... 
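The killprocess calls above follow a guard-then-kill pattern; a minimal sketch of the idea, reconstructed from the traced commands (the sudo branch and the retry details are simplified assumptions):
  killprocess() {
    local pid=$1
    kill -0 "$pid" || return 0                     # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in the trace
      [ "$process_name" = sudo ] && return 1       # simplified: never signal a sudo wrapper directly
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                    # reap it so the caller sees the real exit status
  }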
00:05:11.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:11.170 10:48:04 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:11.170 10:48:04 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:11.170 10:48:04 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:11.429 10:48:04 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.429 10:48:04 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:11.429 10:48:04 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.429 10:48:04 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:11.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.429 --rc genhtml_branch_coverage=1 00:05:11.429 --rc genhtml_function_coverage=1 00:05:11.429 --rc genhtml_legend=1 00:05:11.429 --rc geninfo_all_blocks=1 00:05:11.429 --rc geninfo_unexecuted_blocks=1 00:05:11.429 00:05:11.429 ' 00:05:11.429 10:48:04 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:11.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.429 --rc genhtml_branch_coverage=1 00:05:11.429 --rc genhtml_function_coverage=1 
00:05:11.429 --rc genhtml_legend=1 00:05:11.429 --rc geninfo_all_blocks=1 00:05:11.429 --rc geninfo_unexecuted_blocks=1 00:05:11.429 00:05:11.429 ' 00:05:11.429 10:48:04 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:11.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.429 --rc genhtml_branch_coverage=1 00:05:11.429 --rc genhtml_function_coverage=1 00:05:11.429 --rc genhtml_legend=1 00:05:11.429 --rc geninfo_all_blocks=1 00:05:11.429 --rc geninfo_unexecuted_blocks=1 00:05:11.429 00:05:11.429 ' 00:05:11.429 10:48:04 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:11.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.429 --rc genhtml_branch_coverage=1 00:05:11.429 --rc genhtml_function_coverage=1 00:05:11.429 --rc genhtml_legend=1 00:05:11.429 --rc geninfo_all_blocks=1 00:05:11.429 --rc geninfo_unexecuted_blocks=1 00:05:11.429 00:05:11.429 ' 00:05:11.429 10:48:04 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:11.429 10:48:04 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:11.429 10:48:04 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:11.429 10:48:04 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:11.429 10:48:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.429 10:48:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.429 10:48:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.429 ************************************ 00:05:11.429 START TEST default_locks 00:05:11.429 ************************************ 00:05:11.429 10:48:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:11.429 10:48:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58886 00:05:11.429 10:48:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.429 10:48:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58886 00:05:11.429 10:48:04 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58886 ']' 00:05:11.429 10:48:04 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.429 10:48:04 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.429 10:48:04 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.429 10:48:04 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.429 10:48:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.429 [2024-12-09 10:48:04.487973] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
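The scripts/common.sh trace a little earlier (lt 1.15 2) is an ordinary dotted-version comparison used to pick lcov options; the idea, as a hedged sketch rather than the exact helper:
  lt() {   # succeeds when version $1 sorts strictly before version $2
    local IFS=.-: v=0
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    while (( v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}) )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      v=$((v + 1))
    done
    return 1                                       # equal versions are not "less than"
  }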
00:05:11.429 [2024-12-09 10:48:04.488180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58886 ] 00:05:11.688 [2024-12-09 10:48:04.649793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.688 [2024-12-09 10:48:04.704590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.688 [2024-12-09 10:48:04.762094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:12.252 10:48:05 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.252 10:48:05 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:12.252 10:48:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58886 00:05:12.252 10:48:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58886 00:05:12.252 10:48:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.818 10:48:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58886 00:05:12.818 10:48:05 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58886 ']' 00:05:12.818 10:48:05 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58886 00:05:12.818 10:48:05 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:12.818 10:48:05 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.818 10:48:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58886 00:05:12.818 killing process with pid 58886 00:05:12.818 10:48:05 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.818 10:48:05 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.818 10:48:05 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58886' 00:05:12.818 10:48:05 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58886 00:05:12.818 10:48:05 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58886 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58886 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58886 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58886 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58886 ']' 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.077 
10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.077 ERROR: process (pid: 58886) is no longer running 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.077 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58886) - No such process 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:13.077 00:05:13.077 real 0m1.814s 00:05:13.077 user 0m1.931s 00:05:13.077 sys 0m0.530s 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.077 10:48:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.077 ************************************ 00:05:13.077 END TEST default_locks 00:05:13.077 ************************************ 00:05:13.336 10:48:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:13.336 10:48:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.336 10:48:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.336 10:48:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.336 ************************************ 00:05:13.336 START TEST default_locks_via_rpc 00:05:13.336 ************************************ 00:05:13.336 10:48:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:13.336 10:48:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58938 00:05:13.336 10:48:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
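The default_locks test that just finished boils down to: start one target pinned to core 0, prove the core lock is held, then prove everything is released once the process dies. A condensed sketch using only commands visible in the trace (the relative spdk_tgt path is shorthand):
  build/bin/spdk_tgt -m 0x1 &
  pid=$!
  # waitforlisten polls /var/tmp/spdk.sock until the target answers RPCs
  lslocks -p "$pid" | grep -q spdk_cpu_lock        # reactor 0 holds its advisory core lock
  kill "$pid"; wait "$pid" || true
  # afterwards the glob /var/tmp/spdk_cpu_lock_* must come back empty,
  # and waitforlisten on the dead pid must fail (exit status 1)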
00:05:13.336 10:48:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58938 00:05:13.336 10:48:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58938 ']' 00:05:13.336 10:48:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.336 10:48:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.336 10:48:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.336 10:48:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.336 10:48:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.336 [2024-12-09 10:48:06.349653] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:05:13.336 [2024-12-09 10:48:06.349837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58938 ] 00:05:13.336 [2024-12-09 10:48:06.502508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.647 [2024-12-09 10:48:06.553851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.647 [2024-12-09 10:48:06.612725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58938 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58938 00:05:14.214 10:48:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.472 10:48:07 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58938 00:05:14.472 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58938 ']' 00:05:14.472 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58938 00:05:14.472 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:14.472 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.472 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58938 00:05:14.472 killing process with pid 58938 00:05:14.472 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.472 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.472 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58938' 00:05:14.472 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58938 00:05:14.472 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58938 00:05:15.039 00:05:15.039 real 0m1.714s 00:05:15.039 user 0m1.889s 00:05:15.039 sys 0m0.467s 00:05:15.039 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.040 10:48:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.040 ************************************ 00:05:15.040 END TEST default_locks_via_rpc 00:05:15.040 ************************************ 00:05:15.040 10:48:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:15.040 10:48:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.040 10:48:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.040 10:48:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.040 ************************************ 00:05:15.040 START TEST non_locking_app_on_locked_coremask 00:05:15.040 ************************************ 00:05:15.040 10:48:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:15.040 10:48:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58988 00:05:15.040 10:48:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.040 10:48:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58988 /var/tmp/spdk.sock 00:05:15.040 10:48:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58988 ']' 00:05:15.040 10:48:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.040 10:48:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
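default_locks_via_rpc, which the trace above just closed out, drives the same lock through RPC instead of command-line flags; in outline (rpc.py invoked as in the trace, pid handling abbreviated):
  build/bin/spdk_tgt -m 0x1 &
  pid=$!
  scripts/rpc.py framework_disable_cpumask_locks        # target releases its core-lock file(s)
  # while disabled, the glob /var/tmp/spdk_cpu_lock_* expands to nothing
  scripts/rpc.py framework_enable_cpumask_locks         # re-acquires them
  lslocks -p "$pid" | grep -q spdk_cpu_lock             # the advisory lock is visible again
  kill "$pid"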
00:05:15.040 10:48:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.040 10:48:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.040 10:48:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.040 [2024-12-09 10:48:08.116403] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:05:15.040 [2024-12-09 10:48:08.116510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58988 ] 00:05:15.298 [2024-12-09 10:48:08.272876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.298 [2024-12-09 10:48:08.328732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.298 [2024-12-09 10:48:08.386873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:16.233 10:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.233 10:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:16.233 10:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:16.233 10:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59004 00:05:16.233 10:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59004 /var/tmp/spdk2.sock 00:05:16.233 10:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59004 ']' 00:05:16.233 10:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.233 10:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.233 10:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.233 10:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.233 10:48:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.233 [2024-12-09 10:48:09.123259] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:05:16.233 [2024-12-09 10:48:09.123434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59004 ] 00:05:16.233 [2024-12-09 10:48:09.278512] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:16.233 [2024-12-09 10:48:09.278567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.233 [2024-12-09 10:48:09.397145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.491 [2024-12-09 10:48:09.524584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:17.059 10:48:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.059 10:48:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:17.059 10:48:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58988 00:05:17.059 10:48:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58988 00:05:17.059 10:48:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.995 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58988 00:05:17.995 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58988 ']' 00:05:17.995 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58988 00:05:17.995 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:17.996 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.996 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58988 00:05:17.996 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.996 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.996 killing process with pid 58988 00:05:17.996 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58988' 00:05:17.996 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58988 00:05:17.996 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58988 00:05:18.933 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59004 00:05:18.933 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59004 ']' 00:05:18.933 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59004 00:05:18.933 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:18.933 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.933 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59004 00:05:18.933 killing process with pid 59004 00:05:18.933 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.933 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.933 10:48:11 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59004' 00:05:18.933 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59004 00:05:18.933 10:48:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59004 00:05:19.192 ************************************ 00:05:19.192 END TEST non_locking_app_on_locked_coremask 00:05:19.192 ************************************ 00:05:19.192 00:05:19.192 real 0m4.138s 00:05:19.192 user 0m4.631s 00:05:19.192 sys 0m1.119s 00:05:19.192 10:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.192 10:48:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.192 10:48:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:19.192 10:48:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.192 10:48:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.192 10:48:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.192 ************************************ 00:05:19.192 START TEST locking_app_on_unlocked_coremask 00:05:19.192 ************************************ 00:05:19.192 10:48:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:19.192 10:48:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59072 00:05:19.192 10:48:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:19.192 10:48:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59072 /var/tmp/spdk.sock 00:05:19.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.192 10:48:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59072 ']' 00:05:19.192 10:48:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.192 10:48:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.192 10:48:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.192 10:48:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.192 10:48:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.192 [2024-12-09 10:48:12.301615] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:05:19.192 [2024-12-09 10:48:12.301705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59072 ] 00:05:19.452 [2024-12-09 10:48:12.452712] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
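non_locking_app_on_locked_coremask, just completed above, shows the opt-out path: a second target may share an already-locked core as long as it skips the lock. Condensed, with socket paths as in the trace:
  build/bin/spdk_tgt -m 0x1 &                                        # holds /var/tmp/spdk_cpu_lock_000
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  # the second instance logs "CPU core locks deactivated." and starts normally,
  # because it never tries to claim the core-0 lock file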
00:05:19.453 [2024-12-09 10:48:12.452781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.453 [2024-12-09 10:48:12.507930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.453 [2024-12-09 10:48:12.563928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:20.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.392 10:48:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.392 10:48:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:20.392 10:48:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:20.392 10:48:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59084 00:05:20.392 10:48:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59084 /var/tmp/spdk2.sock 00:05:20.392 10:48:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59084 ']' 00:05:20.392 10:48:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.392 10:48:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.392 10:48:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.392 10:48:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.392 10:48:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.392 [2024-12-09 10:48:13.241676] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:05:20.392 [2024-12-09 10:48:13.241758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59084 ] 00:05:20.392 [2024-12-09 10:48:13.389082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.392 [2024-12-09 10:48:13.503587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.651 [2024-12-09 10:48:13.620061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:21.217 10:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.217 10:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:21.217 10:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59084 00:05:21.217 10:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59084 00:05:21.217 10:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.476 10:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59072 00:05:21.476 10:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59072 ']' 00:05:21.476 10:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59072 00:05:21.476 10:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:21.476 10:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.735 10:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59072 00:05:21.735 killing process with pid 59072 00:05:21.735 10:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.735 10:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.735 10:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59072' 00:05:21.735 10:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59072 00:05:21.735 10:48:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59072 00:05:22.304 10:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59084 00:05:22.304 10:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59084 ']' 00:05:22.304 10:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59084 00:05:22.304 10:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:22.304 10:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.304 10:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59084 00:05:22.304 killing process with pid 59084 00:05:22.304 10:48:15 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.304 10:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.304 10:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59084' 00:05:22.304 10:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59084 00:05:22.304 10:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59084 00:05:22.873 ************************************ 00:05:22.873 00:05:22.873 real 0m3.529s 00:05:22.873 user 0m3.882s 00:05:22.873 sys 0m0.915s 00:05:22.873 10:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.873 10:48:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.873 END TEST locking_app_on_unlocked_coremask 00:05:22.873 ************************************ 00:05:22.873 10:48:15 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:22.873 10:48:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.873 10:48:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.873 10:48:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.873 ************************************ 00:05:22.873 START TEST locking_app_on_locked_coremask 00:05:22.873 ************************************ 00:05:22.873 10:48:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:22.873 10:48:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59145 00:05:22.873 10:48:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59145 /var/tmp/spdk.sock 00:05:22.873 10:48:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.873 10:48:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59145 ']' 00:05:22.873 10:48:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.873 10:48:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.873 10:48:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.873 10:48:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.873 10:48:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.873 [2024-12-09 10:48:15.894502] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
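locking_app_on_unlocked_coremask, which ends above, is the mirror image: the first target opts out of the lock, so a later locking target can still claim the core. Roughly:
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &     # core 0 stays unlocked
  first=$!
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # this instance takes the core-0 lock
  second=$!
  lslocks -p "$second" | grep -q spdk_cpu_lock
  kill "$first" "$second"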
00:05:22.873 [2024-12-09 10:48:15.894636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59145 ] 00:05:22.873 [2024-12-09 10:48:16.048837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.132 [2024-12-09 10:48:16.103765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.132 [2024-12-09 10:48:16.160413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59161 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59161 /var/tmp/spdk2.sock 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59161 /var/tmp/spdk2.sock 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59161 /var/tmp/spdk2.sock 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59161 ']' 00:05:23.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.697 10:48:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.697 [2024-12-09 10:48:16.825781] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:05:23.697 [2024-12-09 10:48:16.825853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59161 ] 00:05:23.955 [2024-12-09 10:48:16.972079] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59145 has claimed it. 00:05:23.955 [2024-12-09 10:48:16.972136] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:24.522 ERROR: process (pid: 59161) is no longer running 00:05:24.522 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59161) - No such process 00:05:24.522 10:48:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.522 10:48:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:24.522 10:48:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:24.522 10:48:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:24.522 10:48:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:24.522 10:48:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:24.522 10:48:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59145 00:05:24.522 10:48:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.522 10:48:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59145 00:05:24.781 10:48:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59145 00:05:25.040 10:48:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59145 ']' 00:05:25.040 10:48:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59145 00:05:25.040 10:48:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:25.040 10:48:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.040 10:48:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59145 00:05:25.040 10:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.040 10:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.040 10:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59145' 00:05:25.040 killing process with pid 59145 00:05:25.040 10:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59145 00:05:25.040 10:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59145 00:05:25.300 00:05:25.300 real 0m2.520s 00:05:25.300 user 0m2.814s 00:05:25.300 sys 0m0.626s 00:05:25.300 10:48:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.300 10:48:18 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:25.300 ************************************ 00:05:25.300 END TEST locking_app_on_locked_coremask 00:05:25.300 ************************************ 00:05:25.300 10:48:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:25.300 10:48:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.300 10:48:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.300 10:48:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.300 ************************************ 00:05:25.300 START TEST locking_overlapped_coremask 00:05:25.300 ************************************ 00:05:25.300 10:48:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:25.300 10:48:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:25.300 10:48:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59207 00:05:25.300 10:48:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59207 /var/tmp/spdk.sock 00:05:25.300 10:48:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59207 ']' 00:05:25.300 10:48:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.300 10:48:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.300 10:48:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.300 10:48:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.300 10:48:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.559 [2024-12-09 10:48:18.481051] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
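locking_app_on_locked_coremask, closed out just above, is the negative case: with neither side opting out, the second target must refuse to start. In outline:
  build/bin/spdk_tgt -m 0x1 &                        # pid 59145 in the trace, locks core 0
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # fails with:
  #   claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59145 has claimed it
  # the test wraps the second launch in NOT ..., so an exit status of 1 is the pass condition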
00:05:25.559 [2024-12-09 10:48:18.481122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59207 ] 00:05:25.559 [2024-12-09 10:48:18.613869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.559 [2024-12-09 10:48:18.672343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.559 [2024-12-09 10:48:18.672528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.559 [2024-12-09 10:48:18.672531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.559 [2024-12-09 10:48:18.733834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59225 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59225 /var/tmp/spdk2.sock 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59225 /var/tmp/spdk2.sock 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59225 /var/tmp/spdk2.sock 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59225 ']' 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.497 10:48:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.497 [2024-12-09 10:48:19.406695] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
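The NOT waitforlisten wrapper above encodes the expected outcome of this test: pid 59207 was started without --disable-cpumask-locks, so it claimed the per-core lock files for mask 0x7 (cores 0-2) at startup, and the second target with mask 0x1c is expected to fail on the shared core 2. NOT succeeds only when the wrapped command fails. A rough stand-in for that idiom (the real helper lives in test/common/autotest_common.sh and does more bookkeeping; waitforlisten is the harness function seen in this log):

    NOT() {
        if "$@"; then
            return 1    # the command unexpectedly succeeded
        fi
        return 0        # the command failed, which is what the test wants
    }

    NOT waitforlisten 59225 /var/tmp/spdk2.sock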
00:05:26.497 [2024-12-09 10:48:19.406845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59225 ] 00:05:26.497 [2024-12-09 10:48:19.559947] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59207 has claimed it. 00:05:26.497 [2024-12-09 10:48:19.560030] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:27.066 ERROR: process (pid: 59225) is no longer running 00:05:27.066 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59225) - No such process 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59207 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59207 ']' 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59207 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59207 00:05:27.066 killing process with pid 59207 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59207' 00:05:27.066 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59207 00:05:27.066 10:48:20 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59207 00:05:27.634 00:05:27.634 real 0m2.087s 00:05:27.634 user 0m5.798s 00:05:27.634 sys 0m0.379s 00:05:27.634 ************************************ 00:05:27.634 END TEST locking_overlapped_coremask 00:05:27.635 ************************************ 00:05:27.635 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.635 10:48:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.635 10:48:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:27.635 10:48:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.635 10:48:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.635 10:48:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.635 ************************************ 00:05:27.635 START TEST locking_overlapped_coremask_via_rpc 00:05:27.635 ************************************ 00:05:27.635 10:48:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:27.635 10:48:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59265 00:05:27.635 10:48:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:27.635 10:48:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59265 /var/tmp/spdk.sock 00:05:27.635 10:48:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59265 ']' 00:05:27.635 10:48:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.635 10:48:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.635 10:48:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.635 10:48:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.635 10:48:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.635 [2024-12-09 10:48:20.636628] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:05:27.635 [2024-12-09 10:48:20.636710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59265 ] 00:05:27.635 [2024-12-09 10:48:20.790255] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
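Unlike the run just above, this target is launched with --disable-cpumask-locks, which is what produces the *NOTICE*: CPU core locks deactivated line: the startup claim of the per-core lock files is skipped, so a second target with an overlapping mask can still come up, and the locks are only taken later over JSON-RPC. A minimal illustration of the two launch modes, using the binary path and mask visible in this log (not the literal cpu_locks.sh code, and the two commands are alternatives, not meant to run together):

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # default behaviour: flocks the per-core lock files (/var/tmp/spdk_cpu_lock_000..002 for mask 0x7) at startup
    "$SPDK_TGT" -m 0x7

    # behaviour in this test: start without claiming any core locks
    "$SPDK_TGT" -m 0x7 --disable-cpumask-locks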
00:05:27.635 [2024-12-09 10:48:20.790415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:27.893 [2024-12-09 10:48:20.848837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.893 [2024-12-09 10:48:20.849045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.893 [2024-12-09 10:48:20.849047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.893 [2024-12-09 10:48:20.907393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.461 10:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.461 10:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:28.461 10:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59283 00:05:28.461 10:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:28.461 10:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59283 /var/tmp/spdk2.sock 00:05:28.461 10:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59283 ']' 00:05:28.461 10:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:28.461 10:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.461 10:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:28.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:28.461 10:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.461 10:48:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.461 [2024-12-09 10:48:21.599472] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:05:28.461 [2024-12-09 10:48:21.599628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59283 ] 00:05:28.721 [2024-12-09 10:48:21.754327] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
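The second instance gets its own RPC listen address via -r /var/tmp/spdk2.sock, since two targets on the same host cannot share the default /var/tmp/spdk.sock; every later RPC against this instance therefore passes -s /var/tmp/spdk2.sock. A hedged sketch of that launch-and-wait pattern (the polling loop is illustrative, not the real waitforlisten helper):

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$SPDK_TGT" -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &

    # wait until the target answers on its private RPC socket
    until "$RPC" -s /var/tmp/spdk2.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done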
00:05:28.721 [2024-12-09 10:48:21.754376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.721 [2024-12-09 10:48:21.876782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.721 [2024-12-09 10:48:21.876843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.721 [2024-12-09 10:48:21.876844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:28.979 [2024-12-09 10:48:21.999877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.547 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.547 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.548 [2024-12-09 10:48:22.555883] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59265 has claimed it. 
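Mask 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the only shared core is 2. The first framework_enable_cpumask_locks call (default socket, pid 59265) claims cores 0-2; the same RPC sent to /var/tmp/spdk2.sock (pid 59283) then fails with the claim_cpu_cores error above because core 2 is already locked. Roughly what the two calls look like through rpc.py, assuming the generic front end exposes the method name as a subcommand the way it does for the other methods in this log (the test itself goes through its rpc_cmd wrapper):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$RPC" framework_enable_cpumask_locks                         # pid 59265: locks cores 0,1,2
    "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # pid 59283: fails on core 2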
00:05:29.548 request: 00:05:29.548 { 00:05:29.548 "method": "framework_enable_cpumask_locks", 00:05:29.548 "req_id": 1 00:05:29.548 } 00:05:29.548 Got JSON-RPC error response 00:05:29.548 response: 00:05:29.548 { 00:05:29.548 "code": -32603, 00:05:29.548 "message": "Failed to claim CPU core: 2" 00:05:29.548 } 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59265 /var/tmp/spdk.sock 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59265 ']' 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.548 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.806 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.806 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:29.806 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59283 /var/tmp/spdk2.sock 00:05:29.806 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59283 ']' 00:05:29.806 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.806 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.806 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
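check_remaining_locks, which runs next, verifies the lock files on disk rather than trusting the RPC result: it globs /var/tmp/spdk_cpu_lock_* and compares the list with the brace expansion expected for the claimed mask, and the locked_coremask test earlier additionally piped lslocks -p <pid> through grep spdk_cpu_lock to confirm the flock is really held. A small sketch of the same check, assuming mask 0x7 should leave exactly locks 000-002 behind:

    expected=(/var/tmp/spdk_cpu_lock_{000..002})
    actual=(/var/tmp/spdk_cpu_lock_*)
    [[ "${actual[*]}" == "${expected[*]}" ]] || echo "unexpected lock files: ${actual[*]}"

    # confirm the running target (pid 59265 here) actually holds the flocks
    lslocks -p 59265 | grep -q spdk_cpu_lock && echo "core locks held"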
00:05:29.806 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.806 10:48:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.063 10:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.063 10:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.063 10:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:30.063 10:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:30.063 10:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:30.063 10:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:30.063 00:05:30.063 real 0m2.509s 00:05:30.063 user 0m1.283s 00:05:30.063 sys 0m0.159s 00:05:30.063 10:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.063 10:48:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.063 ************************************ 00:05:30.063 END TEST locking_overlapped_coremask_via_rpc 00:05:30.063 ************************************ 00:05:30.063 10:48:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:30.063 10:48:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59265 ]] 00:05:30.063 10:48:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59265 00:05:30.063 10:48:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59265 ']' 00:05:30.063 10:48:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59265 00:05:30.063 10:48:23 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:30.063 10:48:23 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.063 10:48:23 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59265 00:05:30.063 killing process with pid 59265 00:05:30.063 10:48:23 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.063 10:48:23 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.063 10:48:23 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59265' 00:05:30.063 10:48:23 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59265 00:05:30.063 10:48:23 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59265 00:05:30.629 10:48:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59283 ]] 00:05:30.629 10:48:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59283 00:05:30.629 10:48:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59283 ']' 00:05:30.629 10:48:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59283 00:05:30.629 10:48:23 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:30.629 10:48:23 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.629 
10:48:23 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59283 00:05:30.629 killing process with pid 59283 00:05:30.629 10:48:23 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:30.629 10:48:23 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:30.629 10:48:23 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59283' 00:05:30.629 10:48:23 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59283 00:05:30.630 10:48:23 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59283 00:05:30.888 10:48:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:30.888 10:48:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:30.888 10:48:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59265 ]] 00:05:30.888 10:48:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59265 00:05:30.888 Process with pid 59265 is not found 00:05:30.888 10:48:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59265 ']' 00:05:30.888 10:48:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59265 00:05:30.888 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59265) - No such process 00:05:30.888 10:48:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59265 is not found' 00:05:30.888 10:48:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59283 ]] 00:05:30.888 10:48:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59283 00:05:30.888 10:48:23 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59283 ']' 00:05:30.888 10:48:23 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59283 00:05:30.888 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59283) - No such process 00:05:30.888 10:48:23 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59283 is not found' 00:05:30.888 Process with pid 59283 is not found 00:05:30.888 10:48:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:30.888 00:05:30.888 real 0m19.817s 00:05:30.888 user 0m34.637s 00:05:30.888 sys 0m5.122s 00:05:30.888 10:48:23 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.888 10:48:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.888 ************************************ 00:05:30.888 END TEST cpu_locks 00:05:30.888 ************************************ 00:05:30.888 00:05:30.888 real 0m47.566s 00:05:30.888 user 1m31.790s 00:05:30.888 sys 0m9.045s 00:05:30.888 10:48:24 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.888 10:48:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.888 ************************************ 00:05:30.888 END TEST event 00:05:30.888 ************************************ 00:05:31.146 10:48:24 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:31.146 10:48:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.146 10:48:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.146 10:48:24 -- common/autotest_common.sh@10 -- # set +x 00:05:31.146 ************************************ 00:05:31.146 START TEST thread 00:05:31.146 ************************************ 00:05:31.146 10:48:24 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:31.146 * Looking for test storage... 
00:05:31.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:31.146 10:48:24 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:31.146 10:48:24 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:31.146 10:48:24 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.146 10:48:24 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.146 10:48:24 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.146 10:48:24 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.146 10:48:24 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.146 10:48:24 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.146 10:48:24 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.146 10:48:24 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.146 10:48:24 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.146 10:48:24 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.146 10:48:24 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.146 10:48:24 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.146 10:48:24 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.146 10:48:24 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:31.146 10:48:24 thread -- scripts/common.sh@345 -- # : 1 00:05:31.146 10:48:24 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.146 10:48:24 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.146 10:48:24 thread -- scripts/common.sh@365 -- # decimal 1 00:05:31.146 10:48:24 thread -- scripts/common.sh@353 -- # local d=1 00:05:31.146 10:48:24 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.146 10:48:24 thread -- scripts/common.sh@355 -- # echo 1 00:05:31.146 10:48:24 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.146 10:48:24 thread -- scripts/common.sh@366 -- # decimal 2 00:05:31.146 10:48:24 thread -- scripts/common.sh@353 -- # local d=2 00:05:31.146 10:48:24 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.146 10:48:24 thread -- scripts/common.sh@355 -- # echo 2 00:05:31.146 10:48:24 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.146 10:48:24 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.146 10:48:24 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.146 10:48:24 thread -- scripts/common.sh@368 -- # return 0 00:05:31.146 10:48:24 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.146 10:48:24 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.146 --rc genhtml_branch_coverage=1 00:05:31.146 --rc genhtml_function_coverage=1 00:05:31.146 --rc genhtml_legend=1 00:05:31.146 --rc geninfo_all_blocks=1 00:05:31.146 --rc geninfo_unexecuted_blocks=1 00:05:31.146 00:05:31.146 ' 00:05:31.146 10:48:24 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.146 --rc genhtml_branch_coverage=1 00:05:31.146 --rc genhtml_function_coverage=1 00:05:31.146 --rc genhtml_legend=1 00:05:31.146 --rc geninfo_all_blocks=1 00:05:31.146 --rc geninfo_unexecuted_blocks=1 00:05:31.146 00:05:31.146 ' 00:05:31.146 10:48:24 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:31.146 --rc genhtml_branch_coverage=1 00:05:31.146 --rc genhtml_function_coverage=1 00:05:31.146 --rc genhtml_legend=1 00:05:31.146 --rc geninfo_all_blocks=1 00:05:31.146 --rc geninfo_unexecuted_blocks=1 00:05:31.146 00:05:31.146 ' 00:05:31.146 10:48:24 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.146 --rc genhtml_branch_coverage=1 00:05:31.146 --rc genhtml_function_coverage=1 00:05:31.146 --rc genhtml_legend=1 00:05:31.146 --rc geninfo_all_blocks=1 00:05:31.146 --rc geninfo_unexecuted_blocks=1 00:05:31.146 00:05:31.146 ' 00:05:31.146 10:48:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:31.146 10:48:24 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:31.146 10:48:24 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.146 10:48:24 thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.146 ************************************ 00:05:31.146 START TEST thread_poller_perf 00:05:31.146 ************************************ 00:05:31.146 10:48:24 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:31.404 [2024-12-09 10:48:24.337861] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:05:31.404 [2024-12-09 10:48:24.337982] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59413 ] 00:05:31.404 [2024-12-09 10:48:24.497452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.404 [2024-12-09 10:48:24.552853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.404 Running 1000 pollers for 1 seconds with 1 microseconds period. 
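Reading the poller_perf command line together with its banner: -b 1000 registers 1000 pollers, -t 1 runs them for one second, and -l 1 gives each poller a 1 microsecond period (the second run further down uses -l 0, i.e. pollers with no period). To repeat just this run outside the harness, the invocation from the log is enough:

    /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1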
00:05:32.780 [2024-12-09T10:48:25.959Z] ====================================== 00:05:32.780 [2024-12-09T10:48:25.959Z] busy:2298623988 (cyc) 00:05:32.780 [2024-12-09T10:48:25.959Z] total_run_count: 363000 00:05:32.780 [2024-12-09T10:48:25.959Z] tsc_hz: 2290000000 (cyc) 00:05:32.780 [2024-12-09T10:48:25.959Z] ====================================== 00:05:32.780 [2024-12-09T10:48:25.959Z] poller_cost: 6332 (cyc), 2765 (nsec) 00:05:32.780 00:05:32.780 real 0m1.330s 00:05:32.780 user 0m1.180s 00:05:32.780 sys 0m0.044s 00:05:32.780 10:48:25 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.780 10:48:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.780 ************************************ 00:05:32.780 END TEST thread_poller_perf 00:05:32.780 ************************************ 00:05:32.780 10:48:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:32.780 10:48:25 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:32.780 10:48:25 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.780 10:48:25 thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.780 ************************************ 00:05:32.780 START TEST thread_poller_perf 00:05:32.780 ************************************ 00:05:32.780 10:48:25 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:32.780 [2024-12-09 10:48:25.728055] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:05:32.780 [2024-12-09 10:48:25.728256] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59449 ] 00:05:32.780 [2024-12-09 10:48:25.880619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.780 [2024-12-09 10:48:25.938280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.780 Running 1000 pollers for 1 seconds with 0 microseconds period. 
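The first summary above also shows how poller_cost is derived: 2298623988 busy cycles / 363000 poller runs ≈ 6332 cycles per invocation, and at the reported tsc_hz of 2290000000 cycles/s that is 6332 / 2.29 ≈ 2765 ns. The same arithmetic for the -l 0 run started here gives 2291833196 / 4114000 ≈ 557 cycles ≈ 243 ns, matching the second summary below.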
00:05:34.216 [2024-12-09T10:48:27.395Z] ====================================== 00:05:34.216 [2024-12-09T10:48:27.395Z] busy:2291833196 (cyc) 00:05:34.216 [2024-12-09T10:48:27.395Z] total_run_count: 4114000 00:05:34.216 [2024-12-09T10:48:27.395Z] tsc_hz: 2290000000 (cyc) 00:05:34.216 [2024-12-09T10:48:27.395Z] ====================================== 00:05:34.216 [2024-12-09T10:48:27.395Z] poller_cost: 557 (cyc), 243 (nsec) 00:05:34.216 00:05:34.216 real 0m1.321s 00:05:34.216 user 0m1.179s 00:05:34.216 sys 0m0.036s 00:05:34.216 10:48:27 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.216 10:48:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.216 ************************************ 00:05:34.216 END TEST thread_poller_perf 00:05:34.216 ************************************ 00:05:34.216 10:48:27 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:34.216 00:05:34.216 real 0m2.987s 00:05:34.216 user 0m2.524s 00:05:34.216 sys 0m0.265s 00:05:34.216 10:48:27 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.216 10:48:27 thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.216 ************************************ 00:05:34.216 END TEST thread 00:05:34.216 ************************************ 00:05:34.216 10:48:27 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:34.216 10:48:27 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:34.216 10:48:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.216 10:48:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.216 10:48:27 -- common/autotest_common.sh@10 -- # set +x 00:05:34.216 ************************************ 00:05:34.216 START TEST app_cmdline 00:05:34.216 ************************************ 00:05:34.216 10:48:27 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:34.216 * Looking for test storage... 
00:05:34.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:34.216 10:48:27 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:34.216 10:48:27 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:34.216 10:48:27 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:34.216 10:48:27 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.216 10:48:27 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:34.216 10:48:27 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.216 10:48:27 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:34.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.216 --rc genhtml_branch_coverage=1 00:05:34.216 --rc genhtml_function_coverage=1 00:05:34.216 --rc genhtml_legend=1 00:05:34.216 --rc geninfo_all_blocks=1 00:05:34.216 --rc geninfo_unexecuted_blocks=1 00:05:34.216 00:05:34.216 ' 00:05:34.216 10:48:27 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:34.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.216 --rc genhtml_branch_coverage=1 00:05:34.216 --rc genhtml_function_coverage=1 00:05:34.216 --rc genhtml_legend=1 00:05:34.216 --rc geninfo_all_blocks=1 00:05:34.216 --rc geninfo_unexecuted_blocks=1 00:05:34.216 
00:05:34.216 ' 00:05:34.216 10:48:27 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:34.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.216 --rc genhtml_branch_coverage=1 00:05:34.216 --rc genhtml_function_coverage=1 00:05:34.216 --rc genhtml_legend=1 00:05:34.216 --rc geninfo_all_blocks=1 00:05:34.216 --rc geninfo_unexecuted_blocks=1 00:05:34.216 00:05:34.216 ' 00:05:34.216 10:48:27 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:34.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.217 --rc genhtml_branch_coverage=1 00:05:34.217 --rc genhtml_function_coverage=1 00:05:34.217 --rc genhtml_legend=1 00:05:34.217 --rc geninfo_all_blocks=1 00:05:34.217 --rc geninfo_unexecuted_blocks=1 00:05:34.217 00:05:34.217 ' 00:05:34.217 10:48:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:34.217 10:48:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59531 00:05:34.217 10:48:27 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:34.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.217 10:48:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59531 00:05:34.217 10:48:27 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59531 ']' 00:05:34.217 10:48:27 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.217 10:48:27 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.217 10:48:27 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.217 10:48:27 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.217 10:48:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:34.475 [2024-12-09 10:48:27.444023] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
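cmdline.sh starts this target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods should be callable; the version JSON below and the (( 2 == 2 )) check on the sorted method list confirm that, and the later env_dpdk_get_mem_stats call is expected to come back as JSON-RPC error -32601 (Method not found). The three probes, expressed directly with rpc.py (the test wraps some of these in its rpc_cmd helper):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$RPC" spdk_get_version            # allowed: prints the version object shown below
    "$RPC" rpc_get_methods             # allowed: should list exactly the two permitted methods
    "$RPC" env_dpdk_get_mem_stats      # filtered out: expect error code -32601, "Method not found"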
00:05:34.475 [2024-12-09 10:48:27.444524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59531 ] 00:05:34.475 [2024-12-09 10:48:27.579682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.475 [2024-12-09 10:48:27.637095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.733 [2024-12-09 10:48:27.708408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:34.733 10:48:27 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.733 10:48:27 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:34.733 10:48:27 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:34.991 { 00:05:34.991 "version": "SPDK v25.01-pre git sha1 25cdf096c", 00:05:34.991 "fields": { 00:05:34.991 "major": 25, 00:05:34.991 "minor": 1, 00:05:34.991 "patch": 0, 00:05:34.991 "suffix": "-pre", 00:05:34.991 "commit": "25cdf096c" 00:05:34.991 } 00:05:34.991 } 00:05:34.991 10:48:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:34.991 10:48:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:34.991 10:48:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:34.991 10:48:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:34.991 10:48:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:34.991 10:48:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:34.991 10:48:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:34.991 10:48:28 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.991 10:48:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:34.991 10:48:28 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.249 10:48:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:35.249 10:48:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:35.249 10:48:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:35.250 10:48:28 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:35.250 10:48:28 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:35.250 10:48:28 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:35.250 10:48:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.250 10:48:28 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:35.250 10:48:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.250 10:48:28 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:35.250 10:48:28 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.250 10:48:28 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:35.250 10:48:28 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:35.250 10:48:28 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:35.508 request: 00:05:35.508 { 00:05:35.508 "method": "env_dpdk_get_mem_stats", 00:05:35.508 "req_id": 1 00:05:35.508 } 00:05:35.508 Got JSON-RPC error response 00:05:35.508 response: 00:05:35.508 { 00:05:35.508 "code": -32601, 00:05:35.508 "message": "Method not found" 00:05:35.508 } 00:05:35.508 10:48:28 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:35.508 10:48:28 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:35.508 10:48:28 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:35.508 10:48:28 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:35.508 10:48:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59531 00:05:35.508 10:48:28 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59531 ']' 00:05:35.508 10:48:28 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59531 00:05:35.508 10:48:28 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:35.508 10:48:28 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.508 10:48:28 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59531 00:05:35.508 10:48:28 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.508 10:48:28 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.508 10:48:28 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59531' 00:05:35.508 killing process with pid 59531 00:05:35.508 10:48:28 app_cmdline -- common/autotest_common.sh@973 -- # kill 59531 00:05:35.508 10:48:28 app_cmdline -- common/autotest_common.sh@978 -- # wait 59531 00:05:35.767 00:05:35.767 real 0m1.713s 00:05:35.767 user 0m2.027s 00:05:35.767 sys 0m0.473s 00:05:35.767 10:48:28 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.767 10:48:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:35.767 ************************************ 00:05:35.767 END TEST app_cmdline 00:05:35.767 ************************************ 00:05:35.767 10:48:28 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:35.767 10:48:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.767 10:48:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.767 10:48:28 -- common/autotest_common.sh@10 -- # set +x 00:05:35.767 ************************************ 00:05:35.767 START TEST version 00:05:35.767 ************************************ 00:05:35.767 10:48:28 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:36.026 * Looking for test storage... 
00:05:36.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:36.026 10:48:29 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.026 10:48:29 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.026 10:48:29 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.026 10:48:29 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.026 10:48:29 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.026 10:48:29 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.026 10:48:29 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.026 10:48:29 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.026 10:48:29 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.026 10:48:29 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.026 10:48:29 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.026 10:48:29 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.026 10:48:29 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.026 10:48:29 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.026 10:48:29 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.026 10:48:29 version -- scripts/common.sh@344 -- # case "$op" in 00:05:36.026 10:48:29 version -- scripts/common.sh@345 -- # : 1 00:05:36.026 10:48:29 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.026 10:48:29 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.026 10:48:29 version -- scripts/common.sh@365 -- # decimal 1 00:05:36.026 10:48:29 version -- scripts/common.sh@353 -- # local d=1 00:05:36.026 10:48:29 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.026 10:48:29 version -- scripts/common.sh@355 -- # echo 1 00:05:36.026 10:48:29 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.026 10:48:29 version -- scripts/common.sh@366 -- # decimal 2 00:05:36.026 10:48:29 version -- scripts/common.sh@353 -- # local d=2 00:05:36.026 10:48:29 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.026 10:48:29 version -- scripts/common.sh@355 -- # echo 2 00:05:36.026 10:48:29 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.026 10:48:29 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.026 10:48:29 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.026 10:48:29 version -- scripts/common.sh@368 -- # return 0 00:05:36.026 10:48:29 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.026 10:48:29 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.026 --rc genhtml_branch_coverage=1 00:05:36.026 --rc genhtml_function_coverage=1 00:05:36.026 --rc genhtml_legend=1 00:05:36.026 --rc geninfo_all_blocks=1 00:05:36.026 --rc geninfo_unexecuted_blocks=1 00:05:36.026 00:05:36.026 ' 00:05:36.026 10:48:29 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.026 --rc genhtml_branch_coverage=1 00:05:36.026 --rc genhtml_function_coverage=1 00:05:36.026 --rc genhtml_legend=1 00:05:36.026 --rc geninfo_all_blocks=1 00:05:36.026 --rc geninfo_unexecuted_blocks=1 00:05:36.026 00:05:36.026 ' 00:05:36.026 10:48:29 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.026 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:36.026 --rc genhtml_branch_coverage=1 00:05:36.026 --rc genhtml_function_coverage=1 00:05:36.026 --rc genhtml_legend=1 00:05:36.026 --rc geninfo_all_blocks=1 00:05:36.026 --rc geninfo_unexecuted_blocks=1 00:05:36.026 00:05:36.026 ' 00:05:36.026 10:48:29 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.026 --rc genhtml_branch_coverage=1 00:05:36.026 --rc genhtml_function_coverage=1 00:05:36.026 --rc genhtml_legend=1 00:05:36.026 --rc geninfo_all_blocks=1 00:05:36.026 --rc geninfo_unexecuted_blocks=1 00:05:36.026 00:05:36.026 ' 00:05:36.026 10:48:29 version -- app/version.sh@17 -- # get_header_version major 00:05:36.026 10:48:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:36.026 10:48:29 version -- app/version.sh@14 -- # cut -f2 00:05:36.026 10:48:29 version -- app/version.sh@14 -- # tr -d '"' 00:05:36.026 10:48:29 version -- app/version.sh@17 -- # major=25 00:05:36.026 10:48:29 version -- app/version.sh@18 -- # get_header_version minor 00:05:36.026 10:48:29 version -- app/version.sh@14 -- # cut -f2 00:05:36.026 10:48:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:36.026 10:48:29 version -- app/version.sh@14 -- # tr -d '"' 00:05:36.026 10:48:29 version -- app/version.sh@18 -- # minor=1 00:05:36.026 10:48:29 version -- app/version.sh@19 -- # get_header_version patch 00:05:36.026 10:48:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:36.026 10:48:29 version -- app/version.sh@14 -- # cut -f2 00:05:36.026 10:48:29 version -- app/version.sh@14 -- # tr -d '"' 00:05:36.026 10:48:29 version -- app/version.sh@19 -- # patch=0 00:05:36.026 10:48:29 version -- app/version.sh@20 -- # get_header_version suffix 00:05:36.026 10:48:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:36.026 10:48:29 version -- app/version.sh@14 -- # cut -f2 00:05:36.026 10:48:29 version -- app/version.sh@14 -- # tr -d '"' 00:05:36.285 10:48:29 version -- app/version.sh@20 -- # suffix=-pre 00:05:36.285 10:48:29 version -- app/version.sh@22 -- # version=25.1 00:05:36.285 10:48:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:36.285 10:48:29 version -- app/version.sh@28 -- # version=25.1rc0 00:05:36.286 10:48:29 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:36.286 10:48:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:36.286 10:48:29 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:36.286 10:48:29 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:36.286 00:05:36.286 real 0m0.342s 00:05:36.286 user 0m0.210s 00:05:36.286 sys 0m0.185s 00:05:36.286 ************************************ 00:05:36.286 END TEST version 00:05:36.286 ************************************ 00:05:36.286 10:48:29 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.286 10:48:29 version -- common/autotest_common.sh@10 -- # set +x 00:05:36.286 10:48:29 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:36.286 10:48:29 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:36.286 10:48:29 -- spdk/autotest.sh@194 -- # uname -s 00:05:36.286 10:48:29 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:36.286 10:48:29 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:36.286 10:48:29 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:05:36.286 10:48:29 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:05:36.286 10:48:29 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:36.286 10:48:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.286 10:48:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.286 10:48:29 -- common/autotest_common.sh@10 -- # set +x 00:05:36.286 ************************************ 00:05:36.286 START TEST spdk_dd 00:05:36.286 ************************************ 00:05:36.286 10:48:29 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:36.286 * Looking for test storage... 00:05:36.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:36.286 10:48:29 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.286 10:48:29 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.286 10:48:29 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.544 10:48:29 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.544 10:48:29 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.544 10:48:29 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.544 10:48:29 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.544 10:48:29 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.544 10:48:29 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.544 10:48:29 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.544 10:48:29 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.544 10:48:29 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.544 10:48:29 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@345 -- # : 1 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@368 -- # return 0 00:05:36.545 10:48:29 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.545 10:48:29 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.545 --rc genhtml_branch_coverage=1 00:05:36.545 --rc genhtml_function_coverage=1 00:05:36.545 --rc genhtml_legend=1 00:05:36.545 --rc geninfo_all_blocks=1 00:05:36.545 --rc geninfo_unexecuted_blocks=1 00:05:36.545 00:05:36.545 ' 00:05:36.545 10:48:29 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.545 --rc genhtml_branch_coverage=1 00:05:36.545 --rc genhtml_function_coverage=1 00:05:36.545 --rc genhtml_legend=1 00:05:36.545 --rc geninfo_all_blocks=1 00:05:36.545 --rc geninfo_unexecuted_blocks=1 00:05:36.545 00:05:36.545 ' 00:05:36.545 10:48:29 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.545 --rc genhtml_branch_coverage=1 00:05:36.545 --rc genhtml_function_coverage=1 00:05:36.545 --rc genhtml_legend=1 00:05:36.545 --rc geninfo_all_blocks=1 00:05:36.545 --rc geninfo_unexecuted_blocks=1 00:05:36.545 00:05:36.545 ' 00:05:36.545 10:48:29 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.545 --rc genhtml_branch_coverage=1 00:05:36.545 --rc genhtml_function_coverage=1 00:05:36.545 --rc genhtml_legend=1 00:05:36.545 --rc geninfo_all_blocks=1 00:05:36.545 --rc geninfo_unexecuted_blocks=1 00:05:36.545 00:05:36.545 ' 00:05:36.545 10:48:29 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:36.545 10:48:29 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:36.545 10:48:29 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.545 10:48:29 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.545 10:48:29 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.545 10:48:29 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:36.545 10:48:29 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:36.545 10:48:29 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:36.803 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.063 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:37.063 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:37.063 10:48:30 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:37.063 10:48:30 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@233 -- # local class 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@235 -- # local progif 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@236 -- # class=01 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:05:37.063 10:48:30 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:05:37.063 10:48:30 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:37.063 10:48:30 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:05:37.063 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
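(The stream of dd/common.sh@143 comparisons running through this part of the trace, and continuing below, is check_liburing walking every NEEDED entry that `objdump -p` reports for the spdk_dd binary and matching it against `liburing.so.*`. A minimal standalone sketch of that idea, assuming nothing beyond what the trace itself shows; the binary path is the one used in this run, passed here as a default argument.)

```bash
#!/usr/bin/env bash
# Sketch: decide whether a binary is dynamically linked against liburing by
# scanning its NEEDED entries, the same way the dd/common.sh trace above does.
binary=${1:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd}

liburing_in_use=0
while read -r _ lib _; do
    # Each matching objdump line looks like: "  NEEDED   liburing.so.2"
    if [[ $lib == liburing.so.* ]]; then
        liburing_in_use=1
        break
    fi
done < <(objdump -p "$binary" | grep NEEDED)

echo "liburing_in_use=$liburing_in_use"
```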
00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:37.064 * spdk_dd linked to liburing 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:37.064 10:48:30 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:37.064 10:48:30 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:37.064 10:48:30 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:37.064 10:48:30 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:37.064 10:48:30 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:37.064 10:48:30 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:37.064 10:48:30 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:37.064 10:48:30 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:37.064 10:48:30 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:05:37.065 10:48:30 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:05:37.065 10:48:30 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:05:37.065 10:48:30 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:37.065 10:48:30 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:37.065 10:48:30 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:37.065 10:48:30 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:37.065 10:48:30 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:37.065 10:48:30 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:37.065 10:48:30 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:37.065 10:48:30 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.065 10:48:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:37.065 ************************************ 00:05:37.065 START TEST spdk_dd_basic_rw 00:05:37.065 ************************************ 00:05:37.065 10:48:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:37.324 * Looking for test storage... 00:05:37.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:37.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.324 --rc genhtml_branch_coverage=1 00:05:37.324 --rc genhtml_function_coverage=1 00:05:37.324 --rc genhtml_legend=1 00:05:37.324 --rc geninfo_all_blocks=1 00:05:37.324 --rc geninfo_unexecuted_blocks=1 00:05:37.324 00:05:37.324 ' 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:37.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.324 --rc genhtml_branch_coverage=1 00:05:37.324 --rc genhtml_function_coverage=1 00:05:37.324 --rc genhtml_legend=1 00:05:37.324 --rc geninfo_all_blocks=1 00:05:37.324 --rc geninfo_unexecuted_blocks=1 00:05:37.324 00:05:37.324 ' 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:37.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.324 --rc genhtml_branch_coverage=1 00:05:37.324 --rc genhtml_function_coverage=1 00:05:37.324 --rc genhtml_legend=1 00:05:37.324 --rc geninfo_all_blocks=1 00:05:37.324 --rc geninfo_unexecuted_blocks=1 00:05:37.324 00:05:37.324 ' 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:37.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.324 --rc genhtml_branch_coverage=1 00:05:37.324 --rc genhtml_function_coverage=1 00:05:37.324 --rc genhtml_legend=1 00:05:37.324 --rc geninfo_all_blocks=1 00:05:37.324 --rc geninfo_unexecuted_blocks=1 00:05:37.324 00:05:37.324 ' 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
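(Right after this setup the script calls get_native_nvme_bs for 0000:00:10.0: it captures the controller's `spdk_nvme_identify` output and regex-matches the data size of the currently selected LBA format, which is what produces the very long identify dumps below and the resulting native_bs=4096. A minimal self-contained sketch of that logic, using the same identify binary path as this run; the array capture in the real script is simplified to a scalar and error handling is reduced to early returns.)

```bash
#!/usr/bin/env bash
# Sketch: read a controller's native block size from spdk_nvme_identify output.
get_native_nvme_bs() {
    local pci=$1 id lbaf re_current re_size
    re_current='Current LBA Format: *LBA Format #([0-9]+)'

    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
             -r "trtype:pcie traddr:$pci") || return 1

    # Which LBA format is currently selected (e.g. "04")?
    [[ $id =~ $re_current ]] || return 1
    lbaf=${BASH_REMATCH[1]}

    # The data size of that format is the native block size (4096 here).
    re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re_size ]] || return 1
    echo "${BASH_REMATCH[1]}"
}

# Example: native_bs=$(get_native_nvme_bs 0000:00:10.0)
```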
00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:37.324 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:37.637 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:37.637 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:37.638 ************************************ 00:05:37.638 START TEST dd_bs_lt_native_bs 00:05:37.638 ************************************ 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:37.638 10:48:30 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:37.638 { 00:05:37.638 "subsystems": [ 00:05:37.638 { 00:05:37.638 "subsystem": "bdev", 00:05:37.638 "config": [ 00:05:37.638 { 00:05:37.638 "params": { 00:05:37.638 "trtype": "pcie", 00:05:37.638 "traddr": "0000:00:10.0", 00:05:37.638 "name": "Nvme0" 00:05:37.638 }, 00:05:37.638 "method": "bdev_nvme_attach_controller" 00:05:37.638 }, 00:05:37.638 { 00:05:37.638 "method": "bdev_wait_for_examine" 00:05:37.638 } 00:05:37.638 ] 00:05:37.638 } 00:05:37.638 ] 00:05:37.638 } 00:05:37.638 [2024-12-09 10:48:30.763491] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:05:37.638 [2024-12-09 10:48:30.763567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59881 ] 00:05:37.910 [2024-12-09 10:48:30.916579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.910 [2024-12-09 10:48:30.973623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.910 [2024-12-09 10:48:31.016692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.169 [2024-12-09 10:48:31.121393] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:38.170 [2024-12-09 10:48:31.121461] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:38.170 [2024-12-09 10:48:31.228005] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:38.170 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:05:38.170 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.170 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:05:38.170 ************************************ 00:05:38.170 END TEST dd_bs_lt_native_bs 00:05:38.170 ************************************ 00:05:38.170 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:05:38.170 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:05:38.170 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.170 00:05:38.170 real 0m0.643s 00:05:38.170 user 0m0.454s 00:05:38.170 sys 0m0.142s 00:05:38.170 
10:48:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.170 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:38.429 ************************************ 00:05:38.429 START TEST dd_rw 00:05:38.429 ************************************ 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:38.429 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:38.997 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:38.997 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:38.997 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:38.997 10:48:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:38.997 [2024-12-09 10:48:31.984700] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
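The dd_bs_lt_native_bs test that wraps up above exercises a negative path: spdk_dd is expected to refuse a --bs smaller than the output bdev's native block size, which is why the trace shows the error '--bs value cannot be less than input (1) neither output (4096) native block size' followed by es=1. A minimal stand-alone sketch of that check, assuming the spdk_dd binary path and the 0000:00:10.0 PCIe bdev config seen in this log (this is not the test's own helper code):

    # Hedged sketch: expect spdk_dd to fail when --bs (2048) is below the 4096-byte native block size.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    CONF='{"subsystems":[{"subsystem":"bdev","config":[
            {"method":"bdev_nvme_attach_controller",
             "params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"}},
            {"method":"bdev_wait_for_examine"}]}]}'
    if "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=2048 --count=1 --json <(echo "$CONF"); then
        echo "unexpected success: bs < native block size was accepted" >&2
        exit 1
    else
        echo "spdk_dd rejected bs < native block size, as expected"
    fi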
00:05:38.997 [2024-12-09 10:48:31.984925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59912 ] 00:05:38.997 { 00:05:38.997 "subsystems": [ 00:05:38.997 { 00:05:38.997 "subsystem": "bdev", 00:05:38.997 "config": [ 00:05:38.997 { 00:05:38.997 "params": { 00:05:38.997 "trtype": "pcie", 00:05:38.997 "traddr": "0000:00:10.0", 00:05:38.997 "name": "Nvme0" 00:05:38.997 }, 00:05:38.997 "method": "bdev_nvme_attach_controller" 00:05:38.997 }, 00:05:38.997 { 00:05:38.997 "method": "bdev_wait_for_examine" 00:05:38.997 } 00:05:38.997 ] 00:05:38.997 } 00:05:38.997 ] 00:05:38.997 } 00:05:38.997 [2024-12-09 10:48:32.139659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.255 [2024-12-09 10:48:32.197733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.255 [2024-12-09 10:48:32.241192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:39.255  [2024-12-09T10:48:32.692Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:39.513 00:05:39.513 10:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:39.513 10:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:39.513 10:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:39.513 10:48:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:39.513 { 00:05:39.513 "subsystems": [ 00:05:39.513 { 00:05:39.513 "subsystem": "bdev", 00:05:39.513 "config": [ 00:05:39.513 { 00:05:39.513 "params": { 00:05:39.513 "trtype": "pcie", 00:05:39.513 "traddr": "0000:00:10.0", 00:05:39.513 "name": "Nvme0" 00:05:39.513 }, 00:05:39.513 "method": "bdev_nvme_attach_controller" 00:05:39.513 }, 00:05:39.513 { 00:05:39.513 "method": "bdev_wait_for_examine" 00:05:39.513 } 00:05:39.513 ] 00:05:39.513 } 00:05:39.513 ] 00:05:39.513 } 00:05:39.513 [2024-12-09 10:48:32.620842] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
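The dd_rw parameters traced at the start of this test come from a small amount of shell arithmetic: the 4096-byte native block size is left-shifted to build the block-size list, each size is paired with queue depths 1 and 64, and the --count values seen later in this log (15, 7, 3) keep each transfer at or just under 60 KiB (sizes 61440, 57344, 49152). A small sketch that reproduces those numbers; deriving count by integer division is an assumption that merely matches the observed values:

    native_bs=4096
    qds=(1 64)
    bss=()
    for s in 0 1 2; do
        bss+=($((native_bs << s)))        # 4096, 8192, 16384
    done
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            count=$((61440 / bs))         # 15, 7, 3
            echo "bs=$bs qd=$qd count=$count size=$((count * bs))"   # 61440, 57344, 49152
        done
    done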
00:05:39.513 [2024-12-09 10:48:32.621338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59925 ] 00:05:39.771 [2024-12-09 10:48:32.772986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.771 [2024-12-09 10:48:32.830538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.771 [2024-12-09 10:48:32.873662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.031  [2024-12-09T10:48:33.210Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:40.031 00:05:40.031 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:40.031 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:40.031 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:40.031 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:40.031 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:40.031 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:40.031 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:40.031 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:40.031 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:40.031 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:40.031 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:40.291 [2024-12-09 10:48:33.260781] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:05:40.291 [2024-12-09 10:48:33.260947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59941 ] 00:05:40.291 { 00:05:40.291 "subsystems": [ 00:05:40.291 { 00:05:40.291 "subsystem": "bdev", 00:05:40.291 "config": [ 00:05:40.291 { 00:05:40.291 "params": { 00:05:40.291 "trtype": "pcie", 00:05:40.291 "traddr": "0000:00:10.0", 00:05:40.291 "name": "Nvme0" 00:05:40.291 }, 00:05:40.291 "method": "bdev_nvme_attach_controller" 00:05:40.291 }, 00:05:40.291 { 00:05:40.291 "method": "bdev_wait_for_examine" 00:05:40.291 } 00:05:40.291 ] 00:05:40.291 } 00:05:40.291 ] 00:05:40.291 } 00:05:40.291 [2024-12-09 10:48:33.411986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.550 [2024-12-09 10:48:33.469665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.550 [2024-12-09 10:48:33.512230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.550  [2024-12-09T10:48:33.989Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:40.810 00:05:40.810 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:40.810 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:40.810 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:40.810 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:40.810 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:40.810 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:40.810 10:48:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.377 10:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:41.377 10:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:41.377 10:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:41.377 10:48:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.377 { 00:05:41.377 "subsystems": [ 00:05:41.377 { 00:05:41.377 "subsystem": "bdev", 00:05:41.378 "config": [ 00:05:41.378 { 00:05:41.378 "params": { 00:05:41.378 "trtype": "pcie", 00:05:41.378 "traddr": "0000:00:10.0", 00:05:41.378 "name": "Nvme0" 00:05:41.378 }, 00:05:41.378 "method": "bdev_nvme_attach_controller" 00:05:41.378 }, 00:05:41.378 { 00:05:41.378 "method": "bdev_wait_for_examine" 00:05:41.378 } 00:05:41.378 ] 00:05:41.378 } 00:05:41.378 ] 00:05:41.378 } 00:05:41.378 [2024-12-09 10:48:34.433310] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
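Each dd_rw combination in this log follows the round trip that has just completed above: write dd.dump0 into the Nvme0n1 bdev at the chosen bs/qd, read the same number of blocks back into dd.dump1, diff the two files, then overwrite the start of the bdev with 1 MiB of zeros before the next combination. A hedged sketch of one iteration, with paths and flags taken from the trace; conf() is a stand-in for the test's gen_conf helper, and dd.dump0 is assumed to already hold the 61440 bytes produced by gen_bytes:

    conf() {   # stand-in emitting the bdev config used throughout this log
        echo '{"subsystems":[{"subsystem":"bdev","config":[
                {"method":"bdev_nvme_attach_controller",
                 "params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"}},
                {"method":"bdev_wait_for_examine"}]}]}'
    }
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    D=/home/vagrant/spdk_repo/spdk/test/dd
    bs=4096 qd=1 count=15
    "$SPDK_DD" --if="$D/dd.dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(conf)                    # write
    "$SPDK_DD" --ib=Nvme0n1 --of="$D/dd.dump1" --bs="$bs" --qd="$qd" --count="$count" --json <(conf)   # read back
    diff -q "$D/dd.dump0" "$D/dd.dump1"                                                                # verify
    "$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(conf)                       # clear_nvme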
00:05:41.378 [2024-12-09 10:48:34.433387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59960 ] 00:05:41.636 [2024-12-09 10:48:34.577860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.636 [2024-12-09 10:48:34.636436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.636 [2024-12-09 10:48:34.681777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.636  [2024-12-09T10:48:35.074Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:41.895 00:05:41.895 10:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:41.895 10:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:41.895 10:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:41.895 10:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:41.895 [2024-12-09 10:48:35.062537] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:05:41.895 [2024-12-09 10:48:35.063093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59979 ] 00:05:41.895 { 00:05:41.895 "subsystems": [ 00:05:41.895 { 00:05:41.895 "subsystem": "bdev", 00:05:41.895 "config": [ 00:05:41.895 { 00:05:41.895 "params": { 00:05:41.895 "trtype": "pcie", 00:05:41.895 "traddr": "0000:00:10.0", 00:05:41.895 "name": "Nvme0" 00:05:41.895 }, 00:05:41.896 "method": "bdev_nvme_attach_controller" 00:05:41.896 }, 00:05:41.896 { 00:05:41.896 "method": "bdev_wait_for_examine" 00:05:41.896 } 00:05:41.896 ] 00:05:41.896 } 00:05:41.896 ] 00:05:41.896 } 00:05:42.153 [2024-12-09 10:48:35.214693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.153 [2024-12-09 10:48:35.273668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.153 [2024-12-09 10:48:35.318228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.412  [2024-12-09T10:48:35.848Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:42.669 00:05:42.669 10:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:42.669 10:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:42.669 10:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:42.669 10:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:42.669 10:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:42.669 10:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:42.669 10:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:42.669 10:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:42.670 10:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:42.670 10:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:42.670 10:48:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:42.670 { 00:05:42.670 "subsystems": [ 00:05:42.670 { 00:05:42.670 "subsystem": "bdev", 00:05:42.670 "config": [ 00:05:42.670 { 00:05:42.670 "params": { 00:05:42.670 "trtype": "pcie", 00:05:42.670 "traddr": "0000:00:10.0", 00:05:42.670 "name": "Nvme0" 00:05:42.670 }, 00:05:42.670 "method": "bdev_nvme_attach_controller" 00:05:42.670 }, 00:05:42.670 { 00:05:42.670 "method": "bdev_wait_for_examine" 00:05:42.670 } 00:05:42.670 ] 00:05:42.670 } 00:05:42.670 ] 00:05:42.670 } 00:05:42.670 [2024-12-09 10:48:35.703202] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:05:42.670 [2024-12-09 10:48:35.703285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59994 ] 00:05:42.928 [2024-12-09 10:48:35.859661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.928 [2024-12-09 10:48:35.915274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.928 [2024-12-09 10:48:35.958861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.928  [2024-12-09T10:48:36.374Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:43.195 00:05:43.195 10:48:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:43.195 10:48:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:43.195 10:48:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:43.195 10:48:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:43.195 10:48:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:43.195 10:48:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:43.195 10:48:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:43.195 10:48:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.762 10:48:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:43.762 10:48:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:43.762 10:48:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:43.762 10:48:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:43.762 [2024-12-09 10:48:36.794406] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:05:43.763 [2024-12-09 10:48:36.794570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60019 ] 00:05:43.763 { 00:05:43.763 "subsystems": [ 00:05:43.763 { 00:05:43.763 "subsystem": "bdev", 00:05:43.763 "config": [ 00:05:43.763 { 00:05:43.763 "params": { 00:05:43.763 "trtype": "pcie", 00:05:43.763 "traddr": "0000:00:10.0", 00:05:43.763 "name": "Nvme0" 00:05:43.763 }, 00:05:43.763 "method": "bdev_nvme_attach_controller" 00:05:43.763 }, 00:05:43.763 { 00:05:43.763 "method": "bdev_wait_for_examine" 00:05:43.763 } 00:05:43.763 ] 00:05:43.763 } 00:05:43.763 ] 00:05:43.763 } 00:05:43.763 [2024-12-09 10:48:36.931562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.021 [2024-12-09 10:48:36.998829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.021 [2024-12-09 10:48:37.042057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.021  [2024-12-09T10:48:37.459Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:44.280 00:05:44.280 10:48:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:44.280 10:48:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:44.280 10:48:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:44.280 10:48:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:44.280 { 00:05:44.280 "subsystems": [ 00:05:44.280 { 00:05:44.280 "subsystem": "bdev", 00:05:44.280 "config": [ 00:05:44.280 { 00:05:44.280 "params": { 00:05:44.280 "trtype": "pcie", 00:05:44.280 "traddr": "0000:00:10.0", 00:05:44.280 "name": "Nvme0" 00:05:44.280 }, 00:05:44.280 "method": "bdev_nvme_attach_controller" 00:05:44.280 }, 00:05:44.280 { 00:05:44.280 "method": "bdev_wait_for_examine" 00:05:44.280 } 00:05:44.280 ] 00:05:44.280 } 00:05:44.280 ] 00:05:44.280 } 00:05:44.280 [2024-12-09 10:48:37.430985] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:05:44.280 [2024-12-09 10:48:37.431159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60027 ] 00:05:44.539 [2024-12-09 10:48:37.584575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.539 [2024-12-09 10:48:37.642710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.539 [2024-12-09 10:48:37.685483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.797  [2024-12-09T10:48:38.235Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:45.056 00:05:45.056 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:45.056 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:45.056 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:45.056 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:45.056 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:45.056 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:45.056 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:45.056 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:45.056 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:45.056 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:45.056 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:45.056 [2024-12-09 10:48:38.058217] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:05:45.056 [2024-12-09 10:48:38.058397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60048 ] 00:05:45.056 { 00:05:45.056 "subsystems": [ 00:05:45.056 { 00:05:45.056 "subsystem": "bdev", 00:05:45.056 "config": [ 00:05:45.056 { 00:05:45.056 "params": { 00:05:45.056 "trtype": "pcie", 00:05:45.056 "traddr": "0000:00:10.0", 00:05:45.056 "name": "Nvme0" 00:05:45.056 }, 00:05:45.056 "method": "bdev_nvme_attach_controller" 00:05:45.056 }, 00:05:45.056 { 00:05:45.056 "method": "bdev_wait_for_examine" 00:05:45.056 } 00:05:45.056 ] 00:05:45.056 } 00:05:45.056 ] 00:05:45.056 } 00:05:45.056 [2024-12-09 10:48:38.211973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.315 [2024-12-09 10:48:38.269225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.316 [2024-12-09 10:48:38.312411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:45.316  [2024-12-09T10:48:38.753Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:45.575 00:05:45.575 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:45.575 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:45.575 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:45.575 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:45.575 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:45.575 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:45.575 10:48:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.143 10:48:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:46.143 10:48:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:46.143 10:48:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:46.143 10:48:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.143 { 00:05:46.143 "subsystems": [ 00:05:46.143 { 00:05:46.143 "subsystem": "bdev", 00:05:46.143 "config": [ 00:05:46.143 { 00:05:46.143 "params": { 00:05:46.143 "trtype": "pcie", 00:05:46.143 "traddr": "0000:00:10.0", 00:05:46.143 "name": "Nvme0" 00:05:46.143 }, 00:05:46.143 "method": "bdev_nvme_attach_controller" 00:05:46.143 }, 00:05:46.143 { 00:05:46.143 "method": "bdev_wait_for_examine" 00:05:46.143 } 00:05:46.143 ] 00:05:46.143 } 00:05:46.143 ] 00:05:46.143 } 00:05:46.143 [2024-12-09 10:48:39.235633] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:05:46.143 [2024-12-09 10:48:39.235708] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60067 ] 00:05:46.402 [2024-12-09 10:48:39.387010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.402 [2024-12-09 10:48:39.445033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.402 [2024-12-09 10:48:39.489813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:46.660  [2024-12-09T10:48:39.839Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:46.660 00:05:46.660 10:48:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:46.660 10:48:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:46.660 10:48:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:46.660 10:48:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:46.919 { 00:05:46.919 "subsystems": [ 00:05:46.919 { 00:05:46.919 "subsystem": "bdev", 00:05:46.919 "config": [ 00:05:46.919 { 00:05:46.919 "params": { 00:05:46.919 "trtype": "pcie", 00:05:46.919 "traddr": "0000:00:10.0", 00:05:46.919 "name": "Nvme0" 00:05:46.919 }, 00:05:46.919 "method": "bdev_nvme_attach_controller" 00:05:46.919 }, 00:05:46.919 { 00:05:46.919 "method": "bdev_wait_for_examine" 00:05:46.919 } 00:05:46.919 ] 00:05:46.919 } 00:05:46.919 ] 00:05:46.919 } 00:05:46.919 [2024-12-09 10:48:39.860209] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:05:46.919 [2024-12-09 10:48:39.860337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60080 ] 00:05:46.919 [2024-12-09 10:48:40.013628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.919 [2024-12-09 10:48:40.069257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.178 [2024-12-09 10:48:40.111117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.178  [2024-12-09T10:48:40.616Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:47.437 00:05:47.437 10:48:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:47.437 10:48:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:47.437 10:48:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:47.437 10:48:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:47.437 10:48:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:47.437 10:48:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:47.437 10:48:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:47.437 10:48:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:47.437 10:48:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:47.437 10:48:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:47.437 10:48:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:47.437 [2024-12-09 10:48:40.482322] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:05:47.437 [2024-12-09 10:48:40.482469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60096 ] 00:05:47.437 { 00:05:47.437 "subsystems": [ 00:05:47.437 { 00:05:47.437 "subsystem": "bdev", 00:05:47.437 "config": [ 00:05:47.437 { 00:05:47.437 "params": { 00:05:47.437 "trtype": "pcie", 00:05:47.437 "traddr": "0000:00:10.0", 00:05:47.437 "name": "Nvme0" 00:05:47.437 }, 00:05:47.437 "method": "bdev_nvme_attach_controller" 00:05:47.437 }, 00:05:47.437 { 00:05:47.437 "method": "bdev_wait_for_examine" 00:05:47.437 } 00:05:47.437 ] 00:05:47.437 } 00:05:47.437 ] 00:05:47.437 } 00:05:47.696 [2024-12-09 10:48:40.634645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.696 [2024-12-09 10:48:40.690421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.697 [2024-12-09 10:48:40.732157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.697  [2024-12-09T10:48:41.135Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:47.956 00:05:47.956 10:48:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:47.956 10:48:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:47.956 10:48:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:47.956 10:48:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:47.956 10:48:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:47.956 10:48:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:47.956 10:48:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:47.956 10:48:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:48.525 10:48:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:48.525 10:48:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:48.525 10:48:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:48.525 10:48:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:48.525 [2024-12-09 10:48:41.486548] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:05:48.525 [2024-12-09 10:48:41.486631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60117 ] 00:05:48.525 { 00:05:48.525 "subsystems": [ 00:05:48.525 { 00:05:48.525 "subsystem": "bdev", 00:05:48.525 "config": [ 00:05:48.525 { 00:05:48.525 "params": { 00:05:48.525 "trtype": "pcie", 00:05:48.525 "traddr": "0000:00:10.0", 00:05:48.525 "name": "Nvme0" 00:05:48.525 }, 00:05:48.525 "method": "bdev_nvme_attach_controller" 00:05:48.525 }, 00:05:48.525 { 00:05:48.525 "method": "bdev_wait_for_examine" 00:05:48.525 } 00:05:48.525 ] 00:05:48.525 } 00:05:48.525 ] 00:05:48.525 } 00:05:48.525 [2024-12-09 10:48:41.638742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.525 [2024-12-09 10:48:41.695473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.784 [2024-12-09 10:48:41.739231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.784  [2024-12-09T10:48:42.222Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:49.043 00:05:49.044 10:48:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:49.044 10:48:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:49.044 10:48:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:49.044 10:48:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.044 { 00:05:49.044 "subsystems": [ 00:05:49.044 { 00:05:49.044 "subsystem": "bdev", 00:05:49.044 "config": [ 00:05:49.044 { 00:05:49.044 "params": { 00:05:49.044 "trtype": "pcie", 00:05:49.044 "traddr": "0000:00:10.0", 00:05:49.044 "name": "Nvme0" 00:05:49.044 }, 00:05:49.044 "method": "bdev_nvme_attach_controller" 00:05:49.044 }, 00:05:49.044 { 00:05:49.044 "method": "bdev_wait_for_examine" 00:05:49.044 } 00:05:49.044 ] 00:05:49.044 } 00:05:49.044 ] 00:05:49.044 } 00:05:49.044 [2024-12-09 10:48:42.110816] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:05:49.044 [2024-12-09 10:48:42.110887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60136 ] 00:05:49.303 [2024-12-09 10:48:42.265073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.303 [2024-12-09 10:48:42.321696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.303 [2024-12-09 10:48:42.364265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.303  [2024-12-09T10:48:42.741Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:49.562 00:05:49.562 10:48:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:49.562 10:48:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:49.562 10:48:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:49.562 10:48:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:49.562 10:48:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:49.562 10:48:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:49.562 10:48:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:49.562 10:48:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:49.562 10:48:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:49.562 10:48:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:49.562 10:48:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.821 [2024-12-09 10:48:42.745399] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:05:49.821 [2024-12-09 10:48:42.746128] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60146 ] 00:05:49.821 { 00:05:49.821 "subsystems": [ 00:05:49.821 { 00:05:49.821 "subsystem": "bdev", 00:05:49.821 "config": [ 00:05:49.821 { 00:05:49.821 "params": { 00:05:49.821 "trtype": "pcie", 00:05:49.821 "traddr": "0000:00:10.0", 00:05:49.821 "name": "Nvme0" 00:05:49.821 }, 00:05:49.821 "method": "bdev_nvme_attach_controller" 00:05:49.821 }, 00:05:49.821 { 00:05:49.821 "method": "bdev_wait_for_examine" 00:05:49.821 } 00:05:49.821 ] 00:05:49.821 } 00:05:49.821 ] 00:05:49.821 } 00:05:49.821 [2024-12-09 10:48:42.898015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.821 [2024-12-09 10:48:42.955701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.821 [2024-12-09 10:48:42.998422] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.129  [2024-12-09T10:48:43.566Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:50.387 00:05:50.387 10:48:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:50.387 10:48:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:50.387 10:48:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:50.387 10:48:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:50.387 10:48:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:50.387 10:48:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:50.387 10:48:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.645 10:48:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:50.645 10:48:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:50.645 10:48:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:50.645 10:48:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.645 [2024-12-09 10:48:43.764931] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:05:50.645 [2024-12-09 10:48:43.765123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60165 ] 00:05:50.645 { 00:05:50.645 "subsystems": [ 00:05:50.645 { 00:05:50.645 "subsystem": "bdev", 00:05:50.645 "config": [ 00:05:50.645 { 00:05:50.645 "params": { 00:05:50.645 "trtype": "pcie", 00:05:50.645 "traddr": "0000:00:10.0", 00:05:50.645 "name": "Nvme0" 00:05:50.645 }, 00:05:50.645 "method": "bdev_nvme_attach_controller" 00:05:50.645 }, 00:05:50.645 { 00:05:50.645 "method": "bdev_wait_for_examine" 00:05:50.645 } 00:05:50.645 ] 00:05:50.645 } 00:05:50.645 ] 00:05:50.645 } 00:05:50.903 [2024-12-09 10:48:43.921230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.903 [2024-12-09 10:48:43.981053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.903 [2024-12-09 10:48:44.023216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.162  [2024-12-09T10:48:44.341Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:51.162 00:05:51.419 10:48:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:05:51.419 10:48:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:51.419 10:48:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:51.419 10:48:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.419 [2024-12-09 10:48:44.401928] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:05:51.419 [2024-12-09 10:48:44.402146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60184 ] 00:05:51.419 { 00:05:51.419 "subsystems": [ 00:05:51.419 { 00:05:51.419 "subsystem": "bdev", 00:05:51.419 "config": [ 00:05:51.419 { 00:05:51.419 "params": { 00:05:51.419 "trtype": "pcie", 00:05:51.419 "traddr": "0000:00:10.0", 00:05:51.419 "name": "Nvme0" 00:05:51.419 }, 00:05:51.419 "method": "bdev_nvme_attach_controller" 00:05:51.419 }, 00:05:51.419 { 00:05:51.419 "method": "bdev_wait_for_examine" 00:05:51.419 } 00:05:51.419 ] 00:05:51.419 } 00:05:51.419 ] 00:05:51.419 } 00:05:51.419 [2024-12-09 10:48:44.540272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.677 [2024-12-09 10:48:44.611952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.677 [2024-12-09 10:48:44.656356] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.677  [2024-12-09T10:48:45.115Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:51.936 00:05:51.936 10:48:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:51.936 10:48:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:51.936 10:48:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:51.936 10:48:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:51.936 10:48:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:51.936 10:48:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:51.936 10:48:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:51.936 10:48:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:51.936 10:48:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:51.936 10:48:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:51.936 10:48:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.936 [2024-12-09 10:48:45.045496] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
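The dd_rw_offset test that begins below seeds itself with 'gen_bytes 4096', which is where the long alphanumeric payload a few lines down comes from. The real gen_bytes helper is not shown in this log; a hypothetical equivalent that yields a similar 4096-character lowercase-alphanumeric string would be:

    # Hypothetical stand-in for gen_bytes (the actual helper lives in the SPDK test tree):
    gen_bytes() {
        local count=$1
        tr -dc 'a-z0-9' < /dev/urandom | head -c "$count"
    }
    data=$(gen_bytes 4096)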
00:05:51.936 [2024-12-09 10:48:45.045971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60203 ] 00:05:51.936 { 00:05:51.936 "subsystems": [ 00:05:51.936 { 00:05:51.936 "subsystem": "bdev", 00:05:51.936 "config": [ 00:05:51.936 { 00:05:51.936 "params": { 00:05:51.936 "trtype": "pcie", 00:05:51.936 "traddr": "0000:00:10.0", 00:05:51.936 "name": "Nvme0" 00:05:51.936 }, 00:05:51.936 "method": "bdev_nvme_attach_controller" 00:05:51.936 }, 00:05:51.936 { 00:05:51.936 "method": "bdev_wait_for_examine" 00:05:51.936 } 00:05:51.936 ] 00:05:51.936 } 00:05:51.936 ] 00:05:51.936 } 00:05:52.200 [2024-12-09 10:48:45.197206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.200 [2024-12-09 10:48:45.257226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.200 [2024-12-09 10:48:45.302301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.462  [2024-12-09T10:48:45.641Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:52.462 00:05:52.462 ************************************ 00:05:52.462 END TEST dd_rw 00:05:52.462 ************************************ 00:05:52.462 00:05:52.462 real 0m14.211s 00:05:52.462 user 0m10.600s 00:05:52.462 sys 0m4.754s 00:05:52.462 10:48:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.462 10:48:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.721 10:48:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:05:52.721 10:48:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.721 10:48:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.721 10:48:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.721 ************************************ 00:05:52.721 START TEST dd_rw_offset 00:05:52.721 ************************************ 00:05:52.721 10:48:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:05:52.721 10:48:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:05:52.721 10:48:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:05:52.721 10:48:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:05:52.721 10:48:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:52.721 10:48:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:05:52.721 10:48:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=d2yh4hmtm9ntkrk3y1qehyq1k3sc3snsynn5qg27slzggybq9s43j8i57f0zrtgaykafs5mybw59wwj4cbzcabol3d70zzgymoub8vx25y9npgxfj5ed2giqay7ea4yw29meidoi6iwz0oemmagxfcykckedzmciu8k1h91ersh7zlud66847srpz2vmipt527jtpl99t1eu92vk3u4ctw7tayhl7m2ghmju9l4ct2ni4toaj41o6n9uruds8n47inh9ikyuututqyw9eky393hynp3auvfo6k3ek2vfyk1g27i4rvo7mbdr4llg7ug3d9cw904xm8tjvisfo28kc84ukqrluttp57empdxll7v0ljrwiebzn7p2f7eh2ur6ccnbbxufvsk18lx7o1rd4b08bx0p1rxwdsfiiooywr6t5iknhavl5f0oh9n8ospxk1y80z25tiplyj8cg1ar81krzpzdmfxkwz4h75pgzsk5ax7etyj8f91vplbqanfff92yufd99lrwpuuwde1iyknuzugg0ugxhltg5o2n1b885ri2s5xzye993r07ymhptol1yrfvbk51hb0o6vhw2nz1yw2lbk5nmbkayxea0pneu37rl32xcc9gsk47s92p5ya1qb0regi4bb463ev9pwfnuyqrh55o94wze69af5jqpywrceh08xr298cdz3dordwaseb8it2hkaj3770je02szdf2wtgg2hyr8w4vuz3fmr65wlubkf7rruu8szyi2jrdchdq2wdyov5oa6f6hksdvlbkzgaxcvs5c1enhd9ifcavvgnk00gf9zggaofdd8i42kqlmdomj6v6o0cesqjp17475lfr1op3q60q4b7bi0zfale9a6hoi4z7p7p7aztqmz1quir0smeo7y3atn0tmdurk15r6c46x5uj089hqpb70878vihs12v8plc15p57juhf8yctv5t9t32nu3dc1n335ue2up0826vyosjoexftf50vh3jaqpybn1dir0laxqa00cexxuw7kap8u95nb7ckammu7dzfz680881ha2aqg5le9ul8y9ag4wlnqfg479nk8dax04p8d68gt6fc6iln33gznjo7nl5p8nra05691s2dup02ciuhzry9uifzhv2xjd17wg46rx6linn2c32pof3w7wamxcmqq5yyq0g2h2boga71emg4ognbgs9nklp4grxbttbpnvs97bv9s58uccielaie3sw0d3vp3cwokmepnccmabxtbk6sg5rguftr34nrgvbeh0j7tjsijskkh5787yksbs3njn3xheyth1hv4009akzn478ns3a6460jiychxds16nhgvg4jr79guech5i33oig3a29lug30qt1d0yftmq4yng3ywlabkd5z3g27fkbsen1mfpz4lwr740spq4uphp8oxg2c19zxs7vo75ewpd4m5oqw1scpdqowyjctq6mz07wtna546k32dx0mov65v46luuqy20ah2p21j9p9aa5pwk5ebnaf4xfzmavjwv6o7ej7ot1va5ivw1zuyidq85gqz0zlhp16q3ks19ost6k6kx284oh918tx1d2n7zay9uopi2f77qh56rej3l5wz1sewasca9u74ccccn6fz8p0bucq8rnqw8r193qfc4hrr7bitn3tnepr1n27tek22w7v7mdojmezutwksf8fxs92i1h7c7iyd19u0vpuh343qu8y9038h8qvxv226hljc0dzqnrftaq4k84hfvf1quvx5yqw16tg2botqhz3pf621nou531akq9yburtxkwbxr1sfgwktpa4vtywyag9s5su619o9x6ekrqi57cqbrzjlkh2y891ace5yow7pocwt4zfjekonl3hvtrw1qo7f85l7rwopnccsvkqa85572u44qt7bhh4i76d4ijgoza9wassfzq9yc0ktuh0abhw1fq5j45eiwborakw1u5zcwgljkapqbu6fgk92h8gfhmr69mu4hphxgzg3xb59lvpq8x21p2q3ujtx3imkhq596r260s3h6ispqnt0r0l3yuzmr6yc7a9b6mbfx9atk65jmh37xk20jb6qgysdvo1n8fqtu0j0kj0lcu6z23cwyub23od4j492z9bgdvp69eyu08gt87jbcakk0q251hzzk1im7dr86h7inm6k22h6fyale8l1z57cmr4oajmwp4ux1rzjcf5hr3wze8w2hpqi7fazoaujxaffoyz2fw0ruuydr0dijg8n85vz9eyfqo4wdg34jr6npxxvmpt95jv14jbnq7rq5ulpimn0ztii5pvmd65tctrsvr8sji0er3wnx271ltd2jtubidgp3uze3h35r7e7uati6b59lwojjdh05vvvefn0ridw0197qmfev7wi9z8vzy4bwdpadkqsjvevilc07lqkae9qld6gar291g32hvqckhvnw04gdgjbfi9ojdx3bhsnt4aqmxew9ple8baysbd7t1hfkdy4wqes8rgywds40g1eg0j4ylqlmrnek3ugahlawrh8tab0kapvr1id42vtyavif2iytcttycshn5q7hqestkf0gc15evx8abpjx5flmc05r7eq9ttln3c9799ieccy5l8ufv5470zdhmwggx7dmxs3imww3zcpvyn9x9mxhhqbkgj35jbv8jjyu4zhjiu730mgv4vq6f9dzrrvth1ex9vir4sjmdm91snhiuoxs65bwkizz9s5gwvhwkuajym6t6wankjyba4h2khh0jppgdx8iw98fueg87se30ch6brh8pyasssur60ufzupoxpl4r1o1pkipiwb4qmftjmla5g8xmcnxixy775769jsl31vutam8x9yxiedd9ugk5xn0o9clflyu0ve0b7183rlnrku32542bk98kuvsq01d7ux01wg2m84ra82fnkyzgewnos4oe5mhz0qww491ljdkh75va8wrg1twzorr9xljf3swsqrja9nseuiwd9eqhxxgcm8c6v7zo3xk6ae6h7itkcbal4z42hfr0u2ml12nte3529rwegp3n73dc1lhk8jd3sk20pah8llggy9tc0n109agc2thlbt6mdnakabak71v85osbto2ypomlbo8vyebp0cdh0lekmt7qiwe6ics5vapxbh77z4v32s6dzl0tzsemx5te6k01ot029mots2gvcf1x2h86l9910ti2nxx2qyu6e19owpnwwjdz8gis3w33e5ugabgr7qxuaohedsv94czpx81wh6z52hs9nb3za5q03b28vc8dzna593ifd5gck8duwbvzy1djwl43fkwmnarnpfo6b0mulnmnqi8bi2grh2s0ppvx592dhe8l8ywhka58jlss0gr2ew8h1kctq2sxm9bciqt0at678oq5hvgbrttyabfzplxuljrotyi0k406d9t6o5yfnyuq1pcv3og8h85gakdweub7idb6ij2wv7mhbl07ntzc16hu17cpolaofesx01sqcb5joh6
ep7xao5j7a4xqnlub8giv2dq01t5zt78pk5doprl6zlho9nujk1x173oh4frac6vb8v21xmyqbxygdwvkl8ytqv4w92a1j132m09wcedks3yol282c5wf8w6bc91ekt6zyfr5mhdc1ru1ot4ksfkx5vsmreygtjsl5msuc52ls4j463dpl7mnqwgrxe8af39b3b2y4kjut7tgng4essccsqyverzsou1c9fjy9wcz1nz0ky7vrqb0mac11bcds5vnu9rpwix3oo2frb8ope9t28t46e1p8zw58j1zyrrqudk50xzjpvoj5mrbkzi8rdlops77slynx1cb8hm1atkt3l7qifforaz3fwticwemnsyhau9jw3n7si31esbf4pythogutmkq30xnzg927u2d4umpv6f867sqozfm28bmskm7gvmniy2u2xm4pr4vqlli9ibl7rdgcgshbidh6d22nv30dat5y71a97b9pk33gv2865osm9k768yo4keeopivbnpogybv2hsmp4cxje5b4xczw8xg7ls48 00:05:52.721 10:48:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:05:52.721 10:48:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:05:52.721 10:48:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:52.721 10:48:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:52.721 { 00:05:52.721 "subsystems": [ 00:05:52.721 { 00:05:52.721 "subsystem": "bdev", 00:05:52.721 "config": [ 00:05:52.721 { 00:05:52.721 "params": { 00:05:52.721 "trtype": "pcie", 00:05:52.721 "traddr": "0000:00:10.0", 00:05:52.721 "name": "Nvme0" 00:05:52.721 }, 00:05:52.721 "method": "bdev_nvme_attach_controller" 00:05:52.721 }, 00:05:52.721 { 00:05:52.721 "method": "bdev_wait_for_examine" 00:05:52.721 } 00:05:52.721 ] 00:05:52.721 } 00:05:52.721 ] 00:05:52.721 } 00:05:52.721 [2024-12-09 10:48:45.783786] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:05:52.721 [2024-12-09 10:48:45.784203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60230 ] 00:05:52.979 [2024-12-09 10:48:45.936263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.979 [2024-12-09 10:48:45.994794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.979 [2024-12-09 10:48:46.038858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.979  [2024-12-09T10:48:46.416Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:53.237 00:05:53.237 10:48:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:05:53.237 10:48:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:05:53.237 10:48:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:53.237 10:48:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:53.495 [2024-12-09 10:48:46.424021] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
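The dd_rw_offset sequence above writes the generated 4 KiB payload one block into the bdev (--seek=1), and the read just issued pulls the same block back (--skip=1 --count=1) so the truncated comparison at the end of this log can check it against the original string. A condensed sketch of that round trip, reusing the SPDK_DD, D, conf() and gen_bytes stand-ins from the earlier notes (again an approximation, not the test's own code):

    data=$(gen_bytes 4096)                          # payload comparable to the d2yh4... string above
    printf '%s' "$data" > "$D/dd.dump0"
    "$SPDK_DD" --if="$D/dd.dump0" --ob=Nvme0n1 --seek=1 --json <(conf)             # write one block in
    "$SPDK_DD" --ib=Nvme0n1 --of="$D/dd.dump1" --skip=1 --count=1 --json <(conf)   # read it back
    read -rn4096 data_check < "$D/dd.dump1"
    [[ $data == "$data_check" ]] && echo "offset read-back matches"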
00:05:53.495 [2024-12-09 10:48:46.424201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60250 ] 00:05:53.495 { 00:05:53.495 "subsystems": [ 00:05:53.495 { 00:05:53.495 "subsystem": "bdev", 00:05:53.495 "config": [ 00:05:53.495 { 00:05:53.495 "params": { 00:05:53.495 "trtype": "pcie", 00:05:53.495 "traddr": "0000:00:10.0", 00:05:53.495 "name": "Nvme0" 00:05:53.495 }, 00:05:53.495 "method": "bdev_nvme_attach_controller" 00:05:53.495 }, 00:05:53.495 { 00:05:53.495 "method": "bdev_wait_for_examine" 00:05:53.495 } 00:05:53.495 ] 00:05:53.495 } 00:05:53.495 ] 00:05:53.495 } 00:05:53.495 [2024-12-09 10:48:46.576706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.495 [2024-12-09 10:48:46.634733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.754 [2024-12-09 10:48:46.678352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.754  [2024-12-09T10:48:47.192Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:54.013 00:05:54.013 10:48:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:05:54.014 10:48:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ d2yh4hmtm9ntkrk3y1qehyq1k3sc3snsynn5qg27slzggybq9s43j8i57f0zrtgaykafs5mybw59wwj4cbzcabol3d70zzgymoub8vx25y9npgxfj5ed2giqay7ea4yw29meidoi6iwz0oemmagxfcykckedzmciu8k1h91ersh7zlud66847srpz2vmipt527jtpl99t1eu92vk3u4ctw7tayhl7m2ghmju9l4ct2ni4toaj41o6n9uruds8n47inh9ikyuututqyw9eky393hynp3auvfo6k3ek2vfyk1g27i4rvo7mbdr4llg7ug3d9cw904xm8tjvisfo28kc84ukqrluttp57empdxll7v0ljrwiebzn7p2f7eh2ur6ccnbbxufvsk18lx7o1rd4b08bx0p1rxwdsfiiooywr6t5iknhavl5f0oh9n8ospxk1y80z25tiplyj8cg1ar81krzpzdmfxkwz4h75pgzsk5ax7etyj8f91vplbqanfff92yufd99lrwpuuwde1iyknuzugg0ugxhltg5o2n1b885ri2s5xzye993r07ymhptol1yrfvbk51hb0o6vhw2nz1yw2lbk5nmbkayxea0pneu37rl32xcc9gsk47s92p5ya1qb0regi4bb463ev9pwfnuyqrh55o94wze69af5jqpywrceh08xr298cdz3dordwaseb8it2hkaj3770je02szdf2wtgg2hyr8w4vuz3fmr65wlubkf7rruu8szyi2jrdchdq2wdyov5oa6f6hksdvlbkzgaxcvs5c1enhd9ifcavvgnk00gf9zggaofdd8i42kqlmdomj6v6o0cesqjp17475lfr1op3q60q4b7bi0zfale9a6hoi4z7p7p7aztqmz1quir0smeo7y3atn0tmdurk15r6c46x5uj089hqpb70878vihs12v8plc15p57juhf8yctv5t9t32nu3dc1n335ue2up0826vyosjoexftf50vh3jaqpybn1dir0laxqa00cexxuw7kap8u95nb7ckammu7dzfz680881ha2aqg5le9ul8y9ag4wlnqfg479nk8dax04p8d68gt6fc6iln33gznjo7nl5p8nra05691s2dup02ciuhzry9uifzhv2xjd17wg46rx6linn2c32pof3w7wamxcmqq5yyq0g2h2boga71emg4ognbgs9nklp4grxbttbpnvs97bv9s58uccielaie3sw0d3vp3cwokmepnccmabxtbk6sg5rguftr34nrgvbeh0j7tjsijskkh5787yksbs3njn3xheyth1hv4009akzn478ns3a6460jiychxds16nhgvg4jr79guech5i33oig3a29lug30qt1d0yftmq4yng3ywlabkd5z3g27fkbsen1mfpz4lwr740spq4uphp8oxg2c19zxs7vo75ewpd4m5oqw1scpdqowyjctq6mz07wtna546k32dx0mov65v46luuqy20ah2p21j9p9aa5pwk5ebnaf4xfzmavjwv6o7ej7ot1va5ivw1zuyidq85gqz0zlhp16q3ks19ost6k6kx284oh918tx1d2n7zay9uopi2f77qh56rej3l5wz1sewasca9u74ccccn6fz8p0bucq8rnqw8r193qfc4hrr7bitn3tnepr1n27tek22w7v7mdojmezutwksf8fxs92i1h7c7iyd19u0vpuh343qu8y9038h8qvxv226hljc0dzqnrftaq4k84hfvf1quvx5yqw16tg2botqhz3pf621nou531akq9yburtxkwbxr1sfgwktpa4vtywyag9s5su619o9x6ekrqi57cqbrzjlkh2y891ace5yow7pocwt4zfjekonl3hvtrw1qo7f85l7rwopnccsvkqa85572u44qt7bhh4i76d4ijgoza9wassfzq9yc0ktuh0abhw1fq5j45eiwborakw1u5zcwgljkapqbu6fgk92h8gfhmr69mu4hphxgzg3xb59lvpq8x21p2q3ujtx3imkhq596r260s3h6ispqnt0r0l3yuzmr6yc7a9b6mbfx9atk65jmh37xk20jb6qgysdvo1n8fqt
u0j0kj0lcu6z23cwyub23od4j492z9bgdvp69eyu08gt87jbcakk0q251hzzk1im7dr86h7inm6k22h6fyale8l1z57cmr4oajmwp4ux1rzjcf5hr3wze8w2hpqi7fazoaujxaffoyz2fw0ruuydr0dijg8n85vz9eyfqo4wdg34jr6npxxvmpt95jv14jbnq7rq5ulpimn0ztii5pvmd65tctrsvr8sji0er3wnx271ltd2jtubidgp3uze3h35r7e7uati6b59lwojjdh05vvvefn0ridw0197qmfev7wi9z8vzy4bwdpadkqsjvevilc07lqkae9qld6gar291g32hvqckhvnw04gdgjbfi9ojdx3bhsnt4aqmxew9ple8baysbd7t1hfkdy4wqes8rgywds40g1eg0j4ylqlmrnek3ugahlawrh8tab0kapvr1id42vtyavif2iytcttycshn5q7hqestkf0gc15evx8abpjx5flmc05r7eq9ttln3c9799ieccy5l8ufv5470zdhmwggx7dmxs3imww3zcpvyn9x9mxhhqbkgj35jbv8jjyu4zhjiu730mgv4vq6f9dzrrvth1ex9vir4sjmdm91snhiuoxs65bwkizz9s5gwvhwkuajym6t6wankjyba4h2khh0jppgdx8iw98fueg87se30ch6brh8pyasssur60ufzupoxpl4r1o1pkipiwb4qmftjmla5g8xmcnxixy775769jsl31vutam8x9yxiedd9ugk5xn0o9clflyu0ve0b7183rlnrku32542bk98kuvsq01d7ux01wg2m84ra82fnkyzgewnos4oe5mhz0qww491ljdkh75va8wrg1twzorr9xljf3swsqrja9nseuiwd9eqhxxgcm8c6v7zo3xk6ae6h7itkcbal4z42hfr0u2ml12nte3529rwegp3n73dc1lhk8jd3sk20pah8llggy9tc0n109agc2thlbt6mdnakabak71v85osbto2ypomlbo8vyebp0cdh0lekmt7qiwe6ics5vapxbh77z4v32s6dzl0tzsemx5te6k01ot029mots2gvcf1x2h86l9910ti2nxx2qyu6e19owpnwwjdz8gis3w33e5ugabgr7qxuaohedsv94czpx81wh6z52hs9nb3za5q03b28vc8dzna593ifd5gck8duwbvzy1djwl43fkwmnarnpfo6b0mulnmnqi8bi2grh2s0ppvx592dhe8l8ywhka58jlss0gr2ew8h1kctq2sxm9bciqt0at678oq5hvgbrttyabfzplxuljrotyi0k406d9t6o5yfnyuq1pcv3og8h85gakdweub7idb6ij2wv7mhbl07ntzc16hu17cpolaofesx01sqcb5joh6ep7xao5j7a4xqnlub8giv2dq01t5zt78pk5doprl6zlho9nujk1x173oh4frac6vb8v21xmyqbxygdwvkl8ytqv4w92a1j132m09wcedks3yol282c5wf8w6bc91ekt6zyfr5mhdc1ru1ot4ksfkx5vsmreygtjsl5msuc52ls4j463dpl7mnqwgrxe8af39b3b2y4kjut7tgng4essccsqyverzsou1c9fjy9wcz1nz0ky7vrqb0mac11bcds5vnu9rpwix3oo2frb8ope9t28t46e1p8zw58j1zyrrqudk50xzjpvoj5mrbkzi8rdlops77slynx1cb8hm1atkt3l7qifforaz3fwticwemnsyhau9jw3n7si31esbf4pythogutmkq30xnzg927u2d4umpv6f867sqozfm28bmskm7gvmniy2u2xm4pr4vqlli9ibl7rdgcgshbidh6d22nv30dat5y71a97b9pk33gv2865osm9k768yo4keeopivbnpogybv2hsmp4cxje5b4xczw8xg7ls48 == 
\d\2\y\h\4\h\m\t\m\9\n\t\k\r\k\3\y\1\q\e\h\y\q\1\k\3\s\c\3\s\n\s\y\n\n\5\q\g\2\7\s\l\z\g\g\y\b\q\9\s\4\3\j\8\i\5\7\f\0\z\r\t\g\a\y\k\a\f\s\5\m\y\b\w\5\9\w\w\j\4\c\b\z\c\a\b\o\l\3\d\7\0\z\z\g\y\m\o\u\b\8\v\x\2\5\y\9\n\p\g\x\f\j\5\e\d\2\g\i\q\a\y\7\e\a\4\y\w\2\9\m\e\i\d\o\i\6\i\w\z\0\o\e\m\m\a\g\x\f\c\y\k\c\k\e\d\z\m\c\i\u\8\k\1\h\9\1\e\r\s\h\7\z\l\u\d\6\6\8\4\7\s\r\p\z\2\v\m\i\p\t\5\2\7\j\t\p\l\9\9\t\1\e\u\9\2\v\k\3\u\4\c\t\w\7\t\a\y\h\l\7\m\2\g\h\m\j\u\9\l\4\c\t\2\n\i\4\t\o\a\j\4\1\o\6\n\9\u\r\u\d\s\8\n\4\7\i\n\h\9\i\k\y\u\u\t\u\t\q\y\w\9\e\k\y\3\9\3\h\y\n\p\3\a\u\v\f\o\6\k\3\e\k\2\v\f\y\k\1\g\2\7\i\4\r\v\o\7\m\b\d\r\4\l\l\g\7\u\g\3\d\9\c\w\9\0\4\x\m\8\t\j\v\i\s\f\o\2\8\k\c\8\4\u\k\q\r\l\u\t\t\p\5\7\e\m\p\d\x\l\l\7\v\0\l\j\r\w\i\e\b\z\n\7\p\2\f\7\e\h\2\u\r\6\c\c\n\b\b\x\u\f\v\s\k\1\8\l\x\7\o\1\r\d\4\b\0\8\b\x\0\p\1\r\x\w\d\s\f\i\i\o\o\y\w\r\6\t\5\i\k\n\h\a\v\l\5\f\0\o\h\9\n\8\o\s\p\x\k\1\y\8\0\z\2\5\t\i\p\l\y\j\8\c\g\1\a\r\8\1\k\r\z\p\z\d\m\f\x\k\w\z\4\h\7\5\p\g\z\s\k\5\a\x\7\e\t\y\j\8\f\9\1\v\p\l\b\q\a\n\f\f\f\9\2\y\u\f\d\9\9\l\r\w\p\u\u\w\d\e\1\i\y\k\n\u\z\u\g\g\0\u\g\x\h\l\t\g\5\o\2\n\1\b\8\8\5\r\i\2\s\5\x\z\y\e\9\9\3\r\0\7\y\m\h\p\t\o\l\1\y\r\f\v\b\k\5\1\h\b\0\o\6\v\h\w\2\n\z\1\y\w\2\l\b\k\5\n\m\b\k\a\y\x\e\a\0\p\n\e\u\3\7\r\l\3\2\x\c\c\9\g\s\k\4\7\s\9\2\p\5\y\a\1\q\b\0\r\e\g\i\4\b\b\4\6\3\e\v\9\p\w\f\n\u\y\q\r\h\5\5\o\9\4\w\z\e\6\9\a\f\5\j\q\p\y\w\r\c\e\h\0\8\x\r\2\9\8\c\d\z\3\d\o\r\d\w\a\s\e\b\8\i\t\2\h\k\a\j\3\7\7\0\j\e\0\2\s\z\d\f\2\w\t\g\g\2\h\y\r\8\w\4\v\u\z\3\f\m\r\6\5\w\l\u\b\k\f\7\r\r\u\u\8\s\z\y\i\2\j\r\d\c\h\d\q\2\w\d\y\o\v\5\o\a\6\f\6\h\k\s\d\v\l\b\k\z\g\a\x\c\v\s\5\c\1\e\n\h\d\9\i\f\c\a\v\v\g\n\k\0\0\g\f\9\z\g\g\a\o\f\d\d\8\i\4\2\k\q\l\m\d\o\m\j\6\v\6\o\0\c\e\s\q\j\p\1\7\4\7\5\l\f\r\1\o\p\3\q\6\0\q\4\b\7\b\i\0\z\f\a\l\e\9\a\6\h\o\i\4\z\7\p\7\p\7\a\z\t\q\m\z\1\q\u\i\r\0\s\m\e\o\7\y\3\a\t\n\0\t\m\d\u\r\k\1\5\r\6\c\4\6\x\5\u\j\0\8\9\h\q\p\b\7\0\8\7\8\v\i\h\s\1\2\v\8\p\l\c\1\5\p\5\7\j\u\h\f\8\y\c\t\v\5\t\9\t\3\2\n\u\3\d\c\1\n\3\3\5\u\e\2\u\p\0\8\2\6\v\y\o\s\j\o\e\x\f\t\f\5\0\v\h\3\j\a\q\p\y\b\n\1\d\i\r\0\l\a\x\q\a\0\0\c\e\x\x\u\w\7\k\a\p\8\u\9\5\n\b\7\c\k\a\m\m\u\7\d\z\f\z\6\8\0\8\8\1\h\a\2\a\q\g\5\l\e\9\u\l\8\y\9\a\g\4\w\l\n\q\f\g\4\7\9\n\k\8\d\a\x\0\4\p\8\d\6\8\g\t\6\f\c\6\i\l\n\3\3\g\z\n\j\o\7\n\l\5\p\8\n\r\a\0\5\6\9\1\s\2\d\u\p\0\2\c\i\u\h\z\r\y\9\u\i\f\z\h\v\2\x\j\d\1\7\w\g\4\6\r\x\6\l\i\n\n\2\c\3\2\p\o\f\3\w\7\w\a\m\x\c\m\q\q\5\y\y\q\0\g\2\h\2\b\o\g\a\7\1\e\m\g\4\o\g\n\b\g\s\9\n\k\l\p\4\g\r\x\b\t\t\b\p\n\v\s\9\7\b\v\9\s\5\8\u\c\c\i\e\l\a\i\e\3\s\w\0\d\3\v\p\3\c\w\o\k\m\e\p\n\c\c\m\a\b\x\t\b\k\6\s\g\5\r\g\u\f\t\r\3\4\n\r\g\v\b\e\h\0\j\7\t\j\s\i\j\s\k\k\h\5\7\8\7\y\k\s\b\s\3\n\j\n\3\x\h\e\y\t\h\1\h\v\4\0\0\9\a\k\z\n\4\7\8\n\s\3\a\6\4\6\0\j\i\y\c\h\x\d\s\1\6\n\h\g\v\g\4\j\r\7\9\g\u\e\c\h\5\i\3\3\o\i\g\3\a\2\9\l\u\g\3\0\q\t\1\d\0\y\f\t\m\q\4\y\n\g\3\y\w\l\a\b\k\d\5\z\3\g\2\7\f\k\b\s\e\n\1\m\f\p\z\4\l\w\r\7\4\0\s\p\q\4\u\p\h\p\8\o\x\g\2\c\1\9\z\x\s\7\v\o\7\5\e\w\p\d\4\m\5\o\q\w\1\s\c\p\d\q\o\w\y\j\c\t\q\6\m\z\0\7\w\t\n\a\5\4\6\k\3\2\d\x\0\m\o\v\6\5\v\4\6\l\u\u\q\y\2\0\a\h\2\p\2\1\j\9\p\9\a\a\5\p\w\k\5\e\b\n\a\f\4\x\f\z\m\a\v\j\w\v\6\o\7\e\j\7\o\t\1\v\a\5\i\v\w\1\z\u\y\i\d\q\8\5\g\q\z\0\z\l\h\p\1\6\q\3\k\s\1\9\o\s\t\6\k\6\k\x\2\8\4\o\h\9\1\8\t\x\1\d\2\n\7\z\a\y\9\u\o\p\i\2\f\7\7\q\h\5\6\r\e\j\3\l\5\w\z\1\s\e\w\a\s\c\a\9\u\7\4\c\c\c\c\n\6\f\z\8\p\0\b\u\c\q\8\r\n\q\w\8\r\1\9\3\q\f\c\4\h\r\r\7\b\i\t\n\3\t\n\e\p\r\1\n\2\7\t\e\k\2\2\w\7\v\7\m\d\o\j\m\e\z\u\t\w\k\s\f\8\f\x\s\9\2\i\1\h\7\c\7\i\y\d\1\9\u\0\v\p\u\h\3\4\3\q\u\8\y\9\0\3\8\h\8\q\v\x\v\2\2\6\h\l\j\c\0\d\z\q\n\r\f\t\a\q\4\k\
8\4\h\f\v\f\1\q\u\v\x\5\y\q\w\1\6\t\g\2\b\o\t\q\h\z\3\p\f\6\2\1\n\o\u\5\3\1\a\k\q\9\y\b\u\r\t\x\k\w\b\x\r\1\s\f\g\w\k\t\p\a\4\v\t\y\w\y\a\g\9\s\5\s\u\6\1\9\o\9\x\6\e\k\r\q\i\5\7\c\q\b\r\z\j\l\k\h\2\y\8\9\1\a\c\e\5\y\o\w\7\p\o\c\w\t\4\z\f\j\e\k\o\n\l\3\h\v\t\r\w\1\q\o\7\f\8\5\l\7\r\w\o\p\n\c\c\s\v\k\q\a\8\5\5\7\2\u\4\4\q\t\7\b\h\h\4\i\7\6\d\4\i\j\g\o\z\a\9\w\a\s\s\f\z\q\9\y\c\0\k\t\u\h\0\a\b\h\w\1\f\q\5\j\4\5\e\i\w\b\o\r\a\k\w\1\u\5\z\c\w\g\l\j\k\a\p\q\b\u\6\f\g\k\9\2\h\8\g\f\h\m\r\6\9\m\u\4\h\p\h\x\g\z\g\3\x\b\5\9\l\v\p\q\8\x\2\1\p\2\q\3\u\j\t\x\3\i\m\k\h\q\5\9\6\r\2\6\0\s\3\h\6\i\s\p\q\n\t\0\r\0\l\3\y\u\z\m\r\6\y\c\7\a\9\b\6\m\b\f\x\9\a\t\k\6\5\j\m\h\3\7\x\k\2\0\j\b\6\q\g\y\s\d\v\o\1\n\8\f\q\t\u\0\j\0\k\j\0\l\c\u\6\z\2\3\c\w\y\u\b\2\3\o\d\4\j\4\9\2\z\9\b\g\d\v\p\6\9\e\y\u\0\8\g\t\8\7\j\b\c\a\k\k\0\q\2\5\1\h\z\z\k\1\i\m\7\d\r\8\6\h\7\i\n\m\6\k\2\2\h\6\f\y\a\l\e\8\l\1\z\5\7\c\m\r\4\o\a\j\m\w\p\4\u\x\1\r\z\j\c\f\5\h\r\3\w\z\e\8\w\2\h\p\q\i\7\f\a\z\o\a\u\j\x\a\f\f\o\y\z\2\f\w\0\r\u\u\y\d\r\0\d\i\j\g\8\n\8\5\v\z\9\e\y\f\q\o\4\w\d\g\3\4\j\r\6\n\p\x\x\v\m\p\t\9\5\j\v\1\4\j\b\n\q\7\r\q\5\u\l\p\i\m\n\0\z\t\i\i\5\p\v\m\d\6\5\t\c\t\r\s\v\r\8\s\j\i\0\e\r\3\w\n\x\2\7\1\l\t\d\2\j\t\u\b\i\d\g\p\3\u\z\e\3\h\3\5\r\7\e\7\u\a\t\i\6\b\5\9\l\w\o\j\j\d\h\0\5\v\v\v\e\f\n\0\r\i\d\w\0\1\9\7\q\m\f\e\v\7\w\i\9\z\8\v\z\y\4\b\w\d\p\a\d\k\q\s\j\v\e\v\i\l\c\0\7\l\q\k\a\e\9\q\l\d\6\g\a\r\2\9\1\g\3\2\h\v\q\c\k\h\v\n\w\0\4\g\d\g\j\b\f\i\9\o\j\d\x\3\b\h\s\n\t\4\a\q\m\x\e\w\9\p\l\e\8\b\a\y\s\b\d\7\t\1\h\f\k\d\y\4\w\q\e\s\8\r\g\y\w\d\s\4\0\g\1\e\g\0\j\4\y\l\q\l\m\r\n\e\k\3\u\g\a\h\l\a\w\r\h\8\t\a\b\0\k\a\p\v\r\1\i\d\4\2\v\t\y\a\v\i\f\2\i\y\t\c\t\t\y\c\s\h\n\5\q\7\h\q\e\s\t\k\f\0\g\c\1\5\e\v\x\8\a\b\p\j\x\5\f\l\m\c\0\5\r\7\e\q\9\t\t\l\n\3\c\9\7\9\9\i\e\c\c\y\5\l\8\u\f\v\5\4\7\0\z\d\h\m\w\g\g\x\7\d\m\x\s\3\i\m\w\w\3\z\c\p\v\y\n\9\x\9\m\x\h\h\q\b\k\g\j\3\5\j\b\v\8\j\j\y\u\4\z\h\j\i\u\7\3\0\m\g\v\4\v\q\6\f\9\d\z\r\r\v\t\h\1\e\x\9\v\i\r\4\s\j\m\d\m\9\1\s\n\h\i\u\o\x\s\6\5\b\w\k\i\z\z\9\s\5\g\w\v\h\w\k\u\a\j\y\m\6\t\6\w\a\n\k\j\y\b\a\4\h\2\k\h\h\0\j\p\p\g\d\x\8\i\w\9\8\f\u\e\g\8\7\s\e\3\0\c\h\6\b\r\h\8\p\y\a\s\s\s\u\r\6\0\u\f\z\u\p\o\x\p\l\4\r\1\o\1\p\k\i\p\i\w\b\4\q\m\f\t\j\m\l\a\5\g\8\x\m\c\n\x\i\x\y\7\7\5\7\6\9\j\s\l\3\1\v\u\t\a\m\8\x\9\y\x\i\e\d\d\9\u\g\k\5\x\n\0\o\9\c\l\f\l\y\u\0\v\e\0\b\7\1\8\3\r\l\n\r\k\u\3\2\5\4\2\b\k\9\8\k\u\v\s\q\0\1\d\7\u\x\0\1\w\g\2\m\8\4\r\a\8\2\f\n\k\y\z\g\e\w\n\o\s\4\o\e\5\m\h\z\0\q\w\w\4\9\1\l\j\d\k\h\7\5\v\a\8\w\r\g\1\t\w\z\o\r\r\9\x\l\j\f\3\s\w\s\q\r\j\a\9\n\s\e\u\i\w\d\9\e\q\h\x\x\g\c\m\8\c\6\v\7\z\o\3\x\k\6\a\e\6\h\7\i\t\k\c\b\a\l\4\z\4\2\h\f\r\0\u\2\m\l\1\2\n\t\e\3\5\2\9\r\w\e\g\p\3\n\7\3\d\c\1\l\h\k\8\j\d\3\s\k\2\0\p\a\h\8\l\l\g\g\y\9\t\c\0\n\1\0\9\a\g\c\2\t\h\l\b\t\6\m\d\n\a\k\a\b\a\k\7\1\v\8\5\o\s\b\t\o\2\y\p\o\m\l\b\o\8\v\y\e\b\p\0\c\d\h\0\l\e\k\m\t\7\q\i\w\e\6\i\c\s\5\v\a\p\x\b\h\7\7\z\4\v\3\2\s\6\d\z\l\0\t\z\s\e\m\x\5\t\e\6\k\0\1\o\t\0\2\9\m\o\t\s\2\g\v\c\f\1\x\2\h\8\6\l\9\9\1\0\t\i\2\n\x\x\2\q\y\u\6\e\1\9\o\w\p\n\w\w\j\d\z\8\g\i\s\3\w\3\3\e\5\u\g\a\b\g\r\7\q\x\u\a\o\h\e\d\s\v\9\4\c\z\p\x\8\1\w\h\6\z\5\2\h\s\9\n\b\3\z\a\5\q\0\3\b\2\8\v\c\8\d\z\n\a\5\9\3\i\f\d\5\g\c\k\8\d\u\w\b\v\z\y\1\d\j\w\l\4\3\f\k\w\m\n\a\r\n\p\f\o\6\b\0\m\u\l\n\m\n\q\i\8\b\i\2\g\r\h\2\s\0\p\p\v\x\5\9\2\d\h\e\8\l\8\y\w\h\k\a\5\8\j\l\s\s\0\g\r\2\e\w\8\h\1\k\c\t\q\2\s\x\m\9\b\c\i\q\t\0\a\t\6\7\8\o\q\5\h\v\g\b\r\t\t\y\a\b\f\z\p\l\x\u\l\j\r\o\t\y\i\0\k\4\0\6\d\9\t\6\o\5\y\f\n\y\u\q\1\p\c\v\3\o\g\8\h\8\5\g\a\k\d\w\e\u\b\7\i\d\b\6\i\j\2\w\v\7\m\h\b\l\0\7\n\t\z\c\1\6\h\u\1\7\c\p\o\l\a\o\f\e\s\x\0\1\s\q\c\b\5\j\o\h\6\e\p\7\x\a
\o\5\j\7\a\4\x\q\n\l\u\b\8\g\i\v\2\d\q\0\1\t\5\z\t\7\8\p\k\5\d\o\p\r\l\6\z\l\h\o\9\n\u\j\k\1\x\1\7\3\o\h\4\f\r\a\c\6\v\b\8\v\2\1\x\m\y\q\b\x\y\g\d\w\v\k\l\8\y\t\q\v\4\w\9\2\a\1\j\1\3\2\m\0\9\w\c\e\d\k\s\3\y\o\l\2\8\2\c\5\w\f\8\w\6\b\c\9\1\e\k\t\6\z\y\f\r\5\m\h\d\c\1\r\u\1\o\t\4\k\s\f\k\x\5\v\s\m\r\e\y\g\t\j\s\l\5\m\s\u\c\5\2\l\s\4\j\4\6\3\d\p\l\7\m\n\q\w\g\r\x\e\8\a\f\3\9\b\3\b\2\y\4\k\j\u\t\7\t\g\n\g\4\e\s\s\c\c\s\q\y\v\e\r\z\s\o\u\1\c\9\f\j\y\9\w\c\z\1\n\z\0\k\y\7\v\r\q\b\0\m\a\c\1\1\b\c\d\s\5\v\n\u\9\r\p\w\i\x\3\o\o\2\f\r\b\8\o\p\e\9\t\2\8\t\4\6\e\1\p\8\z\w\5\8\j\1\z\y\r\r\q\u\d\k\5\0\x\z\j\p\v\o\j\5\m\r\b\k\z\i\8\r\d\l\o\p\s\7\7\s\l\y\n\x\1\c\b\8\h\m\1\a\t\k\t\3\l\7\q\i\f\f\o\r\a\z\3\f\w\t\i\c\w\e\m\n\s\y\h\a\u\9\j\w\3\n\7\s\i\3\1\e\s\b\f\4\p\y\t\h\o\g\u\t\m\k\q\3\0\x\n\z\g\9\2\7\u\2\d\4\u\m\p\v\6\f\8\6\7\s\q\o\z\f\m\2\8\b\m\s\k\m\7\g\v\m\n\i\y\2\u\2\x\m\4\p\r\4\v\q\l\l\i\9\i\b\l\7\r\d\g\c\g\s\h\b\i\d\h\6\d\2\2\n\v\3\0\d\a\t\5\y\7\1\a\9\7\b\9\p\k\3\3\g\v\2\8\6\5\o\s\m\9\k\7\6\8\y\o\4\k\e\e\o\p\i\v\b\n\p\o\g\y\b\v\2\h\s\m\p\4\c\x\j\e\5\b\4\x\c\z\w\8\x\g\7\l\s\4\8 ]] 00:05:54.014 00:05:54.014 real 0m1.323s 00:05:54.014 user 0m0.952s 00:05:54.014 sys 0m0.510s 00:05:54.014 10:48:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.014 ************************************ 00:05:54.014 END TEST dd_rw_offset 00:05:54.014 ************************************ 00:05:54.014 10:48:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:54.014 10:48:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:05:54.014 10:48:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:05:54.014 10:48:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:54.014 10:48:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:54.014 10:48:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:05:54.014 10:48:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:54.014 10:48:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:05:54.014 10:48:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:54.014 10:48:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:05:54.014 10:48:47 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:54.014 10:48:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.014 [2024-12-09 10:48:47.100649] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
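The dd_rw_offset check that just wrapped up is a seek/skip round trip: 4096 bytes of generated data are written to Nvme0n1 starting at block 1, read back from the same offset into dd.dump1, and the readback is compared with the original string (the read -rn4096 data_check and [[ ... == ... ]] lines above). A condensed sketch, assuming the conf.json stand-in from the earlier example and with $data holding the generated string:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DD_DIR=/home/vagrant/spdk_repo/spdk/test/dd
    "$SPDK_DD" --if="$DD_DIR/dd.dump0" --ob=Nvme0n1 --seek=1 --json conf.json
    "$SPDK_DD" --ib=Nvme0n1 --of="$DD_DIR/dd.dump1" --skip=1 --count=1 --json conf.json
    # Compare the first 4 KiB that came back with the data that went in.
    read -rn4096 data_check < "$DD_DIR/dd.dump1"
    [[ "$data_check" == "$data" ]] && echo "offset round trip OK"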
00:05:54.014 [2024-12-09 10:48:47.100924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60280 ] 00:05:54.014 { 00:05:54.014 "subsystems": [ 00:05:54.014 { 00:05:54.014 "subsystem": "bdev", 00:05:54.014 "config": [ 00:05:54.014 { 00:05:54.014 "params": { 00:05:54.014 "trtype": "pcie", 00:05:54.014 "traddr": "0000:00:10.0", 00:05:54.014 "name": "Nvme0" 00:05:54.014 }, 00:05:54.014 "method": "bdev_nvme_attach_controller" 00:05:54.014 }, 00:05:54.014 { 00:05:54.014 "method": "bdev_wait_for_examine" 00:05:54.014 } 00:05:54.014 ] 00:05:54.014 } 00:05:54.014 ] 00:05:54.014 } 00:05:54.273 [2024-12-09 10:48:47.244971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.273 [2024-12-09 10:48:47.308742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.273 [2024-12-09 10:48:47.352129] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.531  [2024-12-09T10:48:47.710Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:54.531 00:05:54.531 10:48:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:54.531 ************************************ 00:05:54.531 END TEST spdk_dd_basic_rw 00:05:54.531 ************************************ 00:05:54.531 00:05:54.531 real 0m17.496s 00:05:54.531 user 0m12.799s 00:05:54.531 sys 0m5.902s 00:05:54.531 10:48:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.531 10:48:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.791 10:48:47 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:54.791 10:48:47 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.791 10:48:47 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.791 10:48:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:54.791 ************************************ 00:05:54.791 START TEST spdk_dd_posix 00:05:54.791 ************************************ 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:05:54.791 * Looking for test storage... 
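The cleanup step above (clear_nvme) simply zero-fills the first mebibyte of the bdev and removes the scratch dump files, which is the 1024/1024 [kB] copy that closes out spdk_dd_basic_rw. Reduced to its essentials, and reusing the stand-ins from the earlier sketches:

    # SPDK_DD, DD_DIR and conf.json as in the sketches above.
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json conf.json
    rm -f "$DD_DIR/dd.dump0" "$DD_DIR/dd.dump1"   # drop the scratch files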
00:05:54.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:54.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.791 --rc genhtml_branch_coverage=1 00:05:54.791 --rc genhtml_function_coverage=1 00:05:54.791 --rc genhtml_legend=1 00:05:54.791 --rc geninfo_all_blocks=1 00:05:54.791 --rc geninfo_unexecuted_blocks=1 00:05:54.791 00:05:54.791 ' 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:54.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.791 --rc genhtml_branch_coverage=1 00:05:54.791 --rc genhtml_function_coverage=1 00:05:54.791 --rc genhtml_legend=1 00:05:54.791 --rc geninfo_all_blocks=1 00:05:54.791 --rc geninfo_unexecuted_blocks=1 00:05:54.791 00:05:54.791 ' 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:54.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.791 --rc genhtml_branch_coverage=1 00:05:54.791 --rc genhtml_function_coverage=1 00:05:54.791 --rc genhtml_legend=1 00:05:54.791 --rc geninfo_all_blocks=1 00:05:54.791 --rc geninfo_unexecuted_blocks=1 00:05:54.791 00:05:54.791 ' 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:54.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.791 --rc genhtml_branch_coverage=1 00:05:54.791 --rc genhtml_function_coverage=1 00:05:54.791 --rc genhtml_legend=1 00:05:54.791 --rc geninfo_all_blocks=1 00:05:54.791 --rc geninfo_unexecuted_blocks=1 00:05:54.791 00:05:54.791 ' 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:05:54.791 10:48:47 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:05:54.792 * First test run, liburing in use 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:54.792 ************************************ 00:05:54.792 START TEST dd_flag_append 00:05:54.792 ************************************ 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=htcz7m1k5qxboiido8n1d6y5y8ykkxzm 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=4mq122p0y42jp45k5cija6rignr4krld 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s htcz7m1k5qxboiido8n1d6y5y8ykkxzm 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 4mq122p0y42jp45k5cija6rignr4krld 00:05:54.792 10:48:47 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:05:55.051 [2024-12-09 10:48:48.012105] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
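dd_flag_append generates two 32-byte strings (dump0 and dump1 above), writes each to its dump file, and then runs the copy with --oflag=append; the pass condition checked next is simply that dd.dump1 now holds the second string followed by the first. Something like the following, with $dump0 and $dump1 holding the generated strings:

    # SPDK_DD and DD_DIR as in the sketches above.
    printf %s "$dump0" > "$DD_DIR/dd.dump0"
    printf %s "$dump1" > "$DD_DIR/dd.dump1"
    "$SPDK_DD" --if="$DD_DIR/dd.dump0" --of="$DD_DIR/dd.dump1" --oflag=append
    read -rn64 combined < "$DD_DIR/dd.dump1"          # 32 + 32 bytes
    [[ "$combined" == "${dump1}${dump0}" ]] && echo "append OK"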
00:05:55.051 [2024-12-09 10:48:48.012182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60346 ] 00:05:55.051 [2024-12-09 10:48:48.165371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.051 [2024-12-09 10:48:48.225817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.310 [2024-12-09 10:48:48.270166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.310  [2024-12-09T10:48:48.747Z] Copying: 32/32 [B] (average 31 kBps) 00:05:55.568 00:05:55.568 ************************************ 00:05:55.568 END TEST dd_flag_append 00:05:55.568 ************************************ 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 4mq122p0y42jp45k5cija6rignr4krldhtcz7m1k5qxboiido8n1d6y5y8ykkxzm == \4\m\q\1\2\2\p\0\y\4\2\j\p\4\5\k\5\c\i\j\a\6\r\i\g\n\r\4\k\r\l\d\h\t\c\z\7\m\1\k\5\q\x\b\o\i\i\d\o\8\n\1\d\6\y\5\y\8\y\k\k\x\z\m ]] 00:05:55.568 00:05:55.568 real 0m0.566s 00:05:55.568 user 0m0.336s 00:05:55.568 sys 0m0.231s 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:55.568 ************************************ 00:05:55.568 START TEST dd_flag_directory 00:05:55.568 ************************************ 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:55.568 10:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:55.569 10:48:48 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:55.569 [2024-12-09 10:48:48.647671] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:05:55.569 [2024-12-09 10:48:48.647912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60380 ] 00:05:55.827 [2024-12-09 10:48:48.803911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.827 [2024-12-09 10:48:48.861642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.827 [2024-12-09 10:48:48.905151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.827 [2024-12-09 10:48:48.938139] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:55.827 [2024-12-09 10:48:48.938188] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:55.827 [2024-12-09 10:48:48.938198] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:56.085 [2024-12-09 10:48:49.038370] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.085 10:48:49 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:56.085 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:05:56.085 [2024-12-09 10:48:49.209571] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:05:56.085 [2024-12-09 10:48:49.209733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60390 ] 00:05:56.343 [2024-12-09 10:48:49.362145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.343 [2024-12-09 10:48:49.420069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.343 [2024-12-09 10:48:49.463002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.343 [2024-12-09 10:48:49.495405] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:56.343 [2024-12-09 10:48:49.495454] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:05:56.343 [2024-12-09 10:48:49.495464] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:56.600 [2024-12-09 10:48:49.594502] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.601 00:05:56.601 real 0m1.118s 00:05:56.601 user 0m0.650s 00:05:56.601 sys 0m0.257s 00:05:56.601 ************************************ 00:05:56.601 END TEST dd_flag_directory 00:05:56.601 ************************************ 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:05:56.601 10:48:49 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:56.601 ************************************ 00:05:56.601 START TEST dd_flag_nofollow 00:05:56.601 ************************************ 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:56.601 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.859 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.859 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.859 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.859 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.859 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.859 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:56.859 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:56.859 10:48:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:56.859 [2024-12-09 10:48:49.838263] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
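dd_flag_nofollow sets up dd.dump0.link and dd.dump1.link as symlinks to the real dump files and then expects the copy to fail whenever nofollow is requested on the link side; the "Too many levels of symbolic links" errors a little further down are the intended outcomes, not real failures. A minimal sketch of the negative case:

    # SPDK_DD and DD_DIR as in the sketches above.
    ln -fs "$DD_DIR/dd.dump0" "$DD_DIR/dd.dump0.link"
    if ! "$SPDK_DD" --if="$DD_DIR/dd.dump0.link" --iflag=nofollow --of="$DD_DIR/dd.dump1"; then
        echo "nofollow correctly refused to follow the symlink"
    fi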
00:05:56.859 [2024-12-09 10:48:49.838340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60418 ] 00:05:56.859 [2024-12-09 10:48:49.990536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.117 [2024-12-09 10:48:50.048732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.117 [2024-12-09 10:48:50.093056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.117 [2024-12-09 10:48:50.126306] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:57.117 [2024-12-09 10:48:50.126455] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:05:57.117 [2024-12-09 10:48:50.126467] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.117 [2024-12-09 10:48:50.227236] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:57.375 10:48:50 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:57.375 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:05:57.375 [2024-12-09 10:48:50.400136] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:05:57.375 [2024-12-09 10:48:50.400215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60433 ] 00:05:57.633 [2024-12-09 10:48:50.554332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.633 [2024-12-09 10:48:50.614244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.633 [2024-12-09 10:48:50.658215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.633 [2024-12-09 10:48:50.691042] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:57.633 [2024-12-09 10:48:50.691094] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:05:57.633 [2024-12-09 10:48:50.691105] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.633 [2024-12-09 10:48:50.791940] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:05:57.892 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:05:57.892 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.892 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:05:57.892 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:05:57.892 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:05:57.892 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.892 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:05:57.892 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:05:57.892 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:57.892 10:48:50 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:57.892 [2024-12-09 10:48:50.975951] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
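As the control for the same test, the copy is repeated here through dd.dump0.link without any nofollow flag, and this time it is expected to dereference the link and succeed (the 512/512 [B] line that follows). A sketch, with cmp standing in for the string comparison the script actually performs:

    # SPDK_DD and DD_DIR as in the sketches above.
    "$SPDK_DD" --if="$DD_DIR/dd.dump0.link" --of="$DD_DIR/dd.dump1"
    cmp "$DD_DIR/dd.dump0" "$DD_DIR/dd.dump1" && echo "copy through symlink OK"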
00:05:57.892 [2024-12-09 10:48:50.976120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60435 ] 00:05:58.149 [2024-12-09 10:48:51.127062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.149 [2024-12-09 10:48:51.185370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.149 [2024-12-09 10:48:51.229459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.149  [2024-12-09T10:48:51.587Z] Copying: 512/512 [B] (average 500 kBps) 00:05:58.408 00:05:58.408 ************************************ 00:05:58.408 END TEST dd_flag_nofollow 00:05:58.408 ************************************ 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ ereldnluzput4y2x9sa5walw2o1vzt1o5zckd8hdcfdsjn920vcvj2oq9oibc6mrdbodbdzvqt5fyqpywfmkx47wwy4mynpl6kpl205riq9j8su2bnv9h4s9wd8tlxfprpyqwz7adwwx2yp9bgd9jzjx1oxee4w5wav6u3g2fhuch5ke2odow5ytgz5nj4vysc6c9982w1kzr9lqvrvawkq606dlx10z3ysrh7b9brmq4cgmqyz9fhu0o7cnqe9tugn1z5gte97bbdg8a35o3v8yg8tgvrn0r2kf3hf1k92a6rbi6uzuxc8nbzf8ty0fclta2t94fprqknflf0vpxgv5779w2dh03xwopff3wno2wo9f90ry94j7k7yn22rmj3tvaiswoy6tp6bzys5385m7p3dcyhy3ffx9jixlfngwjz97ce2yukg8b2neai1u13fi69oexydqous8tmqksjw7a8b1d1wlnhfkc5te09o4nls9in564yr1atk1ecf8 == \e\r\e\l\d\n\l\u\z\p\u\t\4\y\2\x\9\s\a\5\w\a\l\w\2\o\1\v\z\t\1\o\5\z\c\k\d\8\h\d\c\f\d\s\j\n\9\2\0\v\c\v\j\2\o\q\9\o\i\b\c\6\m\r\d\b\o\d\b\d\z\v\q\t\5\f\y\q\p\y\w\f\m\k\x\4\7\w\w\y\4\m\y\n\p\l\6\k\p\l\2\0\5\r\i\q\9\j\8\s\u\2\b\n\v\9\h\4\s\9\w\d\8\t\l\x\f\p\r\p\y\q\w\z\7\a\d\w\w\x\2\y\p\9\b\g\d\9\j\z\j\x\1\o\x\e\e\4\w\5\w\a\v\6\u\3\g\2\f\h\u\c\h\5\k\e\2\o\d\o\w\5\y\t\g\z\5\n\j\4\v\y\s\c\6\c\9\9\8\2\w\1\k\z\r\9\l\q\v\r\v\a\w\k\q\6\0\6\d\l\x\1\0\z\3\y\s\r\h\7\b\9\b\r\m\q\4\c\g\m\q\y\z\9\f\h\u\0\o\7\c\n\q\e\9\t\u\g\n\1\z\5\g\t\e\9\7\b\b\d\g\8\a\3\5\o\3\v\8\y\g\8\t\g\v\r\n\0\r\2\k\f\3\h\f\1\k\9\2\a\6\r\b\i\6\u\z\u\x\c\8\n\b\z\f\8\t\y\0\f\c\l\t\a\2\t\9\4\f\p\r\q\k\n\f\l\f\0\v\p\x\g\v\5\7\7\9\w\2\d\h\0\3\x\w\o\p\f\f\3\w\n\o\2\w\o\9\f\9\0\r\y\9\4\j\7\k\7\y\n\2\2\r\m\j\3\t\v\a\i\s\w\o\y\6\t\p\6\b\z\y\s\5\3\8\5\m\7\p\3\d\c\y\h\y\3\f\f\x\9\j\i\x\l\f\n\g\w\j\z\9\7\c\e\2\y\u\k\g\8\b\2\n\e\a\i\1\u\1\3\f\i\6\9\o\e\x\y\d\q\o\u\s\8\t\m\q\k\s\j\w\7\a\8\b\1\d\1\w\l\n\h\f\k\c\5\t\e\0\9\o\4\n\l\s\9\i\n\5\6\4\y\r\1\a\t\k\1\e\c\f\8 ]] 00:05:58.408 00:05:58.408 real 0m1.708s 00:05:58.408 user 0m0.999s 00:05:58.408 sys 0m0.499s 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:05:58.408 ************************************ 00:05:58.408 START TEST dd_flag_noatime 00:05:58.408 ************************************ 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733741331 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733741331 00:05:58.408 10:48:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:05:59.784 10:48:52 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:59.784 [2024-12-09 10:48:52.629820] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:05:59.784 [2024-12-09 10:48:52.629896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60483 ] 00:05:59.784 [2024-12-09 10:48:52.790907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.784 [2024-12-09 10:48:52.851579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.784 [2024-12-09 10:48:52.899946] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.784  [2024-12-09T10:48:53.222Z] Copying: 512/512 [B] (average 500 kBps) 00:06:00.043 00:06:00.043 10:48:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:00.043 10:48:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733741331 )) 00:06:00.043 10:48:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.043 10:48:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733741331 )) 00:06:00.043 10:48:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.043 [2024-12-09 10:48:53.220690] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
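dd_flag_noatime snapshots the access time of dd.dump0 with stat --printf=%X, sleeps one second, and copies the file twice: with --iflag=noatime the atime must stay at its old value (1733741331 here), and without it the atime must move forward, which is what the (( atime_if == ... )) and (( atime_if < ... )) arithmetic checks around this point verify. Compressed into a sketch, with atime_before as an illustrative name:

    # SPDK_DD and DD_DIR as in the sketches above.
    atime_before=$(stat --printf=%X "$DD_DIR/dd.dump0")
    sleep 1
    "$SPDK_DD" --if="$DD_DIR/dd.dump0" --iflag=noatime --of="$DD_DIR/dd.dump1"
    (( $(stat --printf=%X "$DD_DIR/dd.dump0") == atime_before )) || echo "noatime did not hold atime"
    "$SPDK_DD" --if="$DD_DIR/dd.dump0" --of="$DD_DIR/dd.dump1"   # plain copy, atime may advance
    (( $(stat --printf=%X "$DD_DIR/dd.dump0") > atime_before )) || echo "atime did not advance"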
00:06:00.043 [2024-12-09 10:48:53.220790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60493 ] 00:06:00.302 [2024-12-09 10:48:53.374475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.302 [2024-12-09 10:48:53.434301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.560 [2024-12-09 10:48:53.483726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.560  [2024-12-09T10:48:53.739Z] Copying: 512/512 [B] (average 500 kBps) 00:06:00.560 00:06:00.560 10:48:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733741333 )) 00:06:00.819 00:06:00.819 real 0m2.208s 00:06:00.819 user 0m0.683s 00:06:00.819 sys 0m0.537s 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:00.819 ************************************ 00:06:00.819 END TEST dd_flag_noatime 00:06:00.819 ************************************ 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:00.819 ************************************ 00:06:00.819 START TEST dd_flags_misc 00:06:00.819 ************************************ 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:00.819 10:48:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:00.819 [2024-12-09 10:48:53.861991] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
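dd_flags_misc drives the same 512-byte copy through every pairing of the read flags (direct, nonblock) with the write flags (direct, nonblock, sync, dsync), checking dd.dump1 against the generated data after each pass; the direct/direct, direct/nonblock and direct/sync iterations are the ones visible below. The driving loop amounts to:

    # SPDK_DD and DD_DIR as in the sketches above.
    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            "$SPDK_DD" --if="$DD_DIR/dd.dump0" --iflag="$flag_ro" \
                       --of="$DD_DIR/dd.dump1" --oflag="$flag_rw"
            # after each pass the 512 bytes in dd.dump1 are compared with the source data
        done
    done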
00:06:00.819 [2024-12-09 10:48:53.862130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60527 ] 00:06:01.077 [2024-12-09 10:48:54.014795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.077 [2024-12-09 10:48:54.071241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.077 [2024-12-09 10:48:54.114376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.077  [2024-12-09T10:48:54.515Z] Copying: 512/512 [B] (average 500 kBps) 00:06:01.336 00:06:01.336 10:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ nhvf3k59uz6tr15gfctkbkbsw7kwmh06whkyezp91z13zxtpuroo86fm72xevuc5dh6bym3atq89emn31o19z57l00wjihajqx9dignesid0097ji52gl5twwxpl4vthsfn6bzq3tp99fffal9fm357osorepnn6kmn27qr7h1jln9wy1hfc88w1v157sg09sj388trhvejmdf4ijavj924p7zk8xuw3rh54fuffzpmhoezere6scfrbqwupv14uopf114hmwnuoo28877mg4uhv7z478xebxzjkeot3lzrrbrz8erk5lka2r8bcuoflb50tzusz56e8f94toqlixxd8yzout75ix7hlx2oefgkg2ux6gtilf9qc5bdqgmp7jslxxnocfcplzqttxlfxsnkcxzj5ttpzx3kyf5ftt483cbhgk31vhvt94usvmf1kfumervl6i9hx22f3gq2u6qnaqmapmkqnqoxpcv4qtdhnye7coyyrtfia0r2ftzxv == \n\h\v\f\3\k\5\9\u\z\6\t\r\1\5\g\f\c\t\k\b\k\b\s\w\7\k\w\m\h\0\6\w\h\k\y\e\z\p\9\1\z\1\3\z\x\t\p\u\r\o\o\8\6\f\m\7\2\x\e\v\u\c\5\d\h\6\b\y\m\3\a\t\q\8\9\e\m\n\3\1\o\1\9\z\5\7\l\0\0\w\j\i\h\a\j\q\x\9\d\i\g\n\e\s\i\d\0\0\9\7\j\i\5\2\g\l\5\t\w\w\x\p\l\4\v\t\h\s\f\n\6\b\z\q\3\t\p\9\9\f\f\f\a\l\9\f\m\3\5\7\o\s\o\r\e\p\n\n\6\k\m\n\2\7\q\r\7\h\1\j\l\n\9\w\y\1\h\f\c\8\8\w\1\v\1\5\7\s\g\0\9\s\j\3\8\8\t\r\h\v\e\j\m\d\f\4\i\j\a\v\j\9\2\4\p\7\z\k\8\x\u\w\3\r\h\5\4\f\u\f\f\z\p\m\h\o\e\z\e\r\e\6\s\c\f\r\b\q\w\u\p\v\1\4\u\o\p\f\1\1\4\h\m\w\n\u\o\o\2\8\8\7\7\m\g\4\u\h\v\7\z\4\7\8\x\e\b\x\z\j\k\e\o\t\3\l\z\r\r\b\r\z\8\e\r\k\5\l\k\a\2\r\8\b\c\u\o\f\l\b\5\0\t\z\u\s\z\5\6\e\8\f\9\4\t\o\q\l\i\x\x\d\8\y\z\o\u\t\7\5\i\x\7\h\l\x\2\o\e\f\g\k\g\2\u\x\6\g\t\i\l\f\9\q\c\5\b\d\q\g\m\p\7\j\s\l\x\x\n\o\c\f\c\p\l\z\q\t\t\x\l\f\x\s\n\k\c\x\z\j\5\t\t\p\z\x\3\k\y\f\5\f\t\t\4\8\3\c\b\h\g\k\3\1\v\h\v\t\9\4\u\s\v\m\f\1\k\f\u\m\e\r\v\l\6\i\9\h\x\2\2\f\3\g\q\2\u\6\q\n\a\q\m\a\p\m\k\q\n\q\o\x\p\c\v\4\q\t\d\h\n\y\e\7\c\o\y\y\r\t\f\i\a\0\r\2\f\t\z\x\v ]] 00:06:01.336 10:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:01.336 10:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:01.336 [2024-12-09 10:48:54.406783] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
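The [[ nhvf3k59... == \n\h\v\f... ]] entry above is not mangled output. Inside [[ ]] the right-hand side of == is a pattern, so when bash's xtrace echoes the expanded command it backslash-escapes every character of a quoted operand to show it will be matched literally; the test is simply comparing the payload read back from dump1 against the payload written to dump0. The effect is easy to reproduce (variable names are illustrative):

    # Sketch: xtrace escaping of a quoted == operand inside [[ ]].
    a=abc; b=abc
    set -x
    [[ $a == "$b" ]]      # traced as: [[ abc == \a\b\c ]]
    set +x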
00:06:01.336 [2024-12-09 10:48:54.406850] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60537 ] 00:06:01.628 [2024-12-09 10:48:54.560260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.628 [2024-12-09 10:48:54.619032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.628 [2024-12-09 10:48:54.663626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.628  [2024-12-09T10:48:55.083Z] Copying: 512/512 [B] (average 500 kBps) 00:06:01.904 00:06:01.904 10:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ nhvf3k59uz6tr15gfctkbkbsw7kwmh06whkyezp91z13zxtpuroo86fm72xevuc5dh6bym3atq89emn31o19z57l00wjihajqx9dignesid0097ji52gl5twwxpl4vthsfn6bzq3tp99fffal9fm357osorepnn6kmn27qr7h1jln9wy1hfc88w1v157sg09sj388trhvejmdf4ijavj924p7zk8xuw3rh54fuffzpmhoezere6scfrbqwupv14uopf114hmwnuoo28877mg4uhv7z478xebxzjkeot3lzrrbrz8erk5lka2r8bcuoflb50tzusz56e8f94toqlixxd8yzout75ix7hlx2oefgkg2ux6gtilf9qc5bdqgmp7jslxxnocfcplzqttxlfxsnkcxzj5ttpzx3kyf5ftt483cbhgk31vhvt94usvmf1kfumervl6i9hx22f3gq2u6qnaqmapmkqnqoxpcv4qtdhnye7coyyrtfia0r2ftzxv == \n\h\v\f\3\k\5\9\u\z\6\t\r\1\5\g\f\c\t\k\b\k\b\s\w\7\k\w\m\h\0\6\w\h\k\y\e\z\p\9\1\z\1\3\z\x\t\p\u\r\o\o\8\6\f\m\7\2\x\e\v\u\c\5\d\h\6\b\y\m\3\a\t\q\8\9\e\m\n\3\1\o\1\9\z\5\7\l\0\0\w\j\i\h\a\j\q\x\9\d\i\g\n\e\s\i\d\0\0\9\7\j\i\5\2\g\l\5\t\w\w\x\p\l\4\v\t\h\s\f\n\6\b\z\q\3\t\p\9\9\f\f\f\a\l\9\f\m\3\5\7\o\s\o\r\e\p\n\n\6\k\m\n\2\7\q\r\7\h\1\j\l\n\9\w\y\1\h\f\c\8\8\w\1\v\1\5\7\s\g\0\9\s\j\3\8\8\t\r\h\v\e\j\m\d\f\4\i\j\a\v\j\9\2\4\p\7\z\k\8\x\u\w\3\r\h\5\4\f\u\f\f\z\p\m\h\o\e\z\e\r\e\6\s\c\f\r\b\q\w\u\p\v\1\4\u\o\p\f\1\1\4\h\m\w\n\u\o\o\2\8\8\7\7\m\g\4\u\h\v\7\z\4\7\8\x\e\b\x\z\j\k\e\o\t\3\l\z\r\r\b\r\z\8\e\r\k\5\l\k\a\2\r\8\b\c\u\o\f\l\b\5\0\t\z\u\s\z\5\6\e\8\f\9\4\t\o\q\l\i\x\x\d\8\y\z\o\u\t\7\5\i\x\7\h\l\x\2\o\e\f\g\k\g\2\u\x\6\g\t\i\l\f\9\q\c\5\b\d\q\g\m\p\7\j\s\l\x\x\n\o\c\f\c\p\l\z\q\t\t\x\l\f\x\s\n\k\c\x\z\j\5\t\t\p\z\x\3\k\y\f\5\f\t\t\4\8\3\c\b\h\g\k\3\1\v\h\v\t\9\4\u\s\v\m\f\1\k\f\u\m\e\r\v\l\6\i\9\h\x\2\2\f\3\g\q\2\u\6\q\n\a\q\m\a\p\m\k\q\n\q\o\x\p\c\v\4\q\t\d\h\n\y\e\7\c\o\y\y\r\t\f\i\a\0\r\2\f\t\z\x\v ]] 00:06:01.904 10:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:01.904 10:48:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:01.904 [2024-12-09 10:48:54.952493] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:01.904 [2024-12-09 10:48:54.952658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60546 ] 00:06:02.162 [2024-12-09 10:48:55.099854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.162 [2024-12-09 10:48:55.157153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.162 [2024-12-09 10:48:55.200320] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.162  [2024-12-09T10:48:55.600Z] Copying: 512/512 [B] (average 83 kBps) 00:06:02.421 00:06:02.421 10:48:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ nhvf3k59uz6tr15gfctkbkbsw7kwmh06whkyezp91z13zxtpuroo86fm72xevuc5dh6bym3atq89emn31o19z57l00wjihajqx9dignesid0097ji52gl5twwxpl4vthsfn6bzq3tp99fffal9fm357osorepnn6kmn27qr7h1jln9wy1hfc88w1v157sg09sj388trhvejmdf4ijavj924p7zk8xuw3rh54fuffzpmhoezere6scfrbqwupv14uopf114hmwnuoo28877mg4uhv7z478xebxzjkeot3lzrrbrz8erk5lka2r8bcuoflb50tzusz56e8f94toqlixxd8yzout75ix7hlx2oefgkg2ux6gtilf9qc5bdqgmp7jslxxnocfcplzqttxlfxsnkcxzj5ttpzx3kyf5ftt483cbhgk31vhvt94usvmf1kfumervl6i9hx22f3gq2u6qnaqmapmkqnqoxpcv4qtdhnye7coyyrtfia0r2ftzxv == \n\h\v\f\3\k\5\9\u\z\6\t\r\1\5\g\f\c\t\k\b\k\b\s\w\7\k\w\m\h\0\6\w\h\k\y\e\z\p\9\1\z\1\3\z\x\t\p\u\r\o\o\8\6\f\m\7\2\x\e\v\u\c\5\d\h\6\b\y\m\3\a\t\q\8\9\e\m\n\3\1\o\1\9\z\5\7\l\0\0\w\j\i\h\a\j\q\x\9\d\i\g\n\e\s\i\d\0\0\9\7\j\i\5\2\g\l\5\t\w\w\x\p\l\4\v\t\h\s\f\n\6\b\z\q\3\t\p\9\9\f\f\f\a\l\9\f\m\3\5\7\o\s\o\r\e\p\n\n\6\k\m\n\2\7\q\r\7\h\1\j\l\n\9\w\y\1\h\f\c\8\8\w\1\v\1\5\7\s\g\0\9\s\j\3\8\8\t\r\h\v\e\j\m\d\f\4\i\j\a\v\j\9\2\4\p\7\z\k\8\x\u\w\3\r\h\5\4\f\u\f\f\z\p\m\h\o\e\z\e\r\e\6\s\c\f\r\b\q\w\u\p\v\1\4\u\o\p\f\1\1\4\h\m\w\n\u\o\o\2\8\8\7\7\m\g\4\u\h\v\7\z\4\7\8\x\e\b\x\z\j\k\e\o\t\3\l\z\r\r\b\r\z\8\e\r\k\5\l\k\a\2\r\8\b\c\u\o\f\l\b\5\0\t\z\u\s\z\5\6\e\8\f\9\4\t\o\q\l\i\x\x\d\8\y\z\o\u\t\7\5\i\x\7\h\l\x\2\o\e\f\g\k\g\2\u\x\6\g\t\i\l\f\9\q\c\5\b\d\q\g\m\p\7\j\s\l\x\x\n\o\c\f\c\p\l\z\q\t\t\x\l\f\x\s\n\k\c\x\z\j\5\t\t\p\z\x\3\k\y\f\5\f\t\t\4\8\3\c\b\h\g\k\3\1\v\h\v\t\9\4\u\s\v\m\f\1\k\f\u\m\e\r\v\l\6\i\9\h\x\2\2\f\3\g\q\2\u\6\q\n\a\q\m\a\p\m\k\q\n\q\o\x\p\c\v\4\q\t\d\h\n\y\e\7\c\o\y\y\r\t\f\i\a\0\r\2\f\t\z\x\v ]] 00:06:02.421 10:48:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:02.421 10:48:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:02.421 [2024-12-09 10:48:55.506236] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
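The sync and dsync write flags correspond to O_SYNC and O_DSYNC: O_SYNC makes each write wait for file data and metadata to reach stable storage, while O_DSYNC waits only for the data plus whatever metadata is needed to read it back. That extra flushing is the likely reason the sync pass above reports a lower average (83 kBps) than the 500 kBps of the earlier buffered passes, although a single 512-byte copy is far too small to treat those numbers as a benchmark. A quick way to compare the two flags with coreutils dd (throughput will vary by device; paths are illustrative):

    # Sketch: compare O_SYNC and O_DSYNC write cost; dd prints a rate summary at the end.
    dd if=/dev/zero of=/tmp/out.sync  bs=4k count=4096 oflag=sync     # data + metadata per write
    dd if=/dev/zero of=/tmp/out.dsync bs=4k count=4096 oflag=dsync    # data-integrity completion only
    rm -f /tmp/out.sync /tmp/out.dsync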
00:06:02.421 [2024-12-09 10:48:55.506309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60556 ] 00:06:02.681 [2024-12-09 10:48:55.657992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.681 [2024-12-09 10:48:55.715885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.681 [2024-12-09 10:48:55.758728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.681  [2024-12-09T10:48:56.119Z] Copying: 512/512 [B] (average 250 kBps) 00:06:02.940 00:06:02.940 10:48:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ nhvf3k59uz6tr15gfctkbkbsw7kwmh06whkyezp91z13zxtpuroo86fm72xevuc5dh6bym3atq89emn31o19z57l00wjihajqx9dignesid0097ji52gl5twwxpl4vthsfn6bzq3tp99fffal9fm357osorepnn6kmn27qr7h1jln9wy1hfc88w1v157sg09sj388trhvejmdf4ijavj924p7zk8xuw3rh54fuffzpmhoezere6scfrbqwupv14uopf114hmwnuoo28877mg4uhv7z478xebxzjkeot3lzrrbrz8erk5lka2r8bcuoflb50tzusz56e8f94toqlixxd8yzout75ix7hlx2oefgkg2ux6gtilf9qc5bdqgmp7jslxxnocfcplzqttxlfxsnkcxzj5ttpzx3kyf5ftt483cbhgk31vhvt94usvmf1kfumervl6i9hx22f3gq2u6qnaqmapmkqnqoxpcv4qtdhnye7coyyrtfia0r2ftzxv == \n\h\v\f\3\k\5\9\u\z\6\t\r\1\5\g\f\c\t\k\b\k\b\s\w\7\k\w\m\h\0\6\w\h\k\y\e\z\p\9\1\z\1\3\z\x\t\p\u\r\o\o\8\6\f\m\7\2\x\e\v\u\c\5\d\h\6\b\y\m\3\a\t\q\8\9\e\m\n\3\1\o\1\9\z\5\7\l\0\0\w\j\i\h\a\j\q\x\9\d\i\g\n\e\s\i\d\0\0\9\7\j\i\5\2\g\l\5\t\w\w\x\p\l\4\v\t\h\s\f\n\6\b\z\q\3\t\p\9\9\f\f\f\a\l\9\f\m\3\5\7\o\s\o\r\e\p\n\n\6\k\m\n\2\7\q\r\7\h\1\j\l\n\9\w\y\1\h\f\c\8\8\w\1\v\1\5\7\s\g\0\9\s\j\3\8\8\t\r\h\v\e\j\m\d\f\4\i\j\a\v\j\9\2\4\p\7\z\k\8\x\u\w\3\r\h\5\4\f\u\f\f\z\p\m\h\o\e\z\e\r\e\6\s\c\f\r\b\q\w\u\p\v\1\4\u\o\p\f\1\1\4\h\m\w\n\u\o\o\2\8\8\7\7\m\g\4\u\h\v\7\z\4\7\8\x\e\b\x\z\j\k\e\o\t\3\l\z\r\r\b\r\z\8\e\r\k\5\l\k\a\2\r\8\b\c\u\o\f\l\b\5\0\t\z\u\s\z\5\6\e\8\f\9\4\t\o\q\l\i\x\x\d\8\y\z\o\u\t\7\5\i\x\7\h\l\x\2\o\e\f\g\k\g\2\u\x\6\g\t\i\l\f\9\q\c\5\b\d\q\g\m\p\7\j\s\l\x\x\n\o\c\f\c\p\l\z\q\t\t\x\l\f\x\s\n\k\c\x\z\j\5\t\t\p\z\x\3\k\y\f\5\f\t\t\4\8\3\c\b\h\g\k\3\1\v\h\v\t\9\4\u\s\v\m\f\1\k\f\u\m\e\r\v\l\6\i\9\h\x\2\2\f\3\g\q\2\u\6\q\n\a\q\m\a\p\m\k\q\n\q\o\x\p\c\v\4\q\t\d\h\n\y\e\7\c\o\y\y\r\t\f\i\a\0\r\2\f\t\z\x\v ]] 00:06:02.940 10:48:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:02.940 10:48:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:02.940 10:48:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:02.940 10:48:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:02.940 10:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:02.940 10:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:02.940 [2024-12-09 10:48:56.059133] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
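The pass that just started switches the read side to nonblock. For a regular file, O_NONBLOCK has essentially no effect on read()/write() on Linux — it matters for FIFOs, sockets and some device nodes — so these combinations mostly confirm that spdk_dd accepts the flag and still produces a byte-identical copy. For reference, a coreutils equivalent (paths illustrative):

    # Sketch: nonblocking open of a regular file behaves like an ordinary read.
    dd if=/tmp/dd.dump0 iflag=nonblock of=/tmp/dd.dump1 oflag=direct status=none
    cmp -s /tmp/dd.dump0 /tmp/dd.dump1 && echo "copy verified"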
00:06:02.940 [2024-12-09 10:48:56.059306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60565 ] 00:06:03.198 [2024-12-09 10:48:56.211332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.198 [2024-12-09 10:48:56.270024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.198 [2024-12-09 10:48:56.313799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.198  [2024-12-09T10:48:56.635Z] Copying: 512/512 [B] (average 500 kBps) 00:06:03.456 00:06:03.456 10:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x9a4h6xjhhughpqh0okbbfya81l5apouinh470gn0128pz8azpmut9bkwdg2oqb1lkd7gig4skgbia5k6sgzmgf7wf3on7kgv47q6eryce7hmy9krx4pjzptq57uk6jezf8mv5ju012lvzdlyatlkh1xo3fcv8l5syl2u9eseaff1okueeft21zxtvrdbup8w03typlw0l14hrxiu3uei6b52cmzdu9lvriq7p8cz4xcnwsida569mptt4voyzp1yq6o01t7vbq6ybghdwadzmpzwebfef3h9lpiffe50mfhwys1zsz8x39nddbv1gcmirt25lzndi3vsy9slykk6ff166lfj7kjh90slzoma3suov2u0j0p79rsud4fkllmkr31cjm0s53gdhrj9hu5j9f4s0yhzrhhf76lqrgfzjyw8r0pgii1pyyldlj4p9wsgc56t3w12yfov9miyjjr57cgd9brfgea80l03blrhqx2w83iq7t1l8k1vndc73gd == \x\9\a\4\h\6\x\j\h\h\u\g\h\p\q\h\0\o\k\b\b\f\y\a\8\1\l\5\a\p\o\u\i\n\h\4\7\0\g\n\0\1\2\8\p\z\8\a\z\p\m\u\t\9\b\k\w\d\g\2\o\q\b\1\l\k\d\7\g\i\g\4\s\k\g\b\i\a\5\k\6\s\g\z\m\g\f\7\w\f\3\o\n\7\k\g\v\4\7\q\6\e\r\y\c\e\7\h\m\y\9\k\r\x\4\p\j\z\p\t\q\5\7\u\k\6\j\e\z\f\8\m\v\5\j\u\0\1\2\l\v\z\d\l\y\a\t\l\k\h\1\x\o\3\f\c\v\8\l\5\s\y\l\2\u\9\e\s\e\a\f\f\1\o\k\u\e\e\f\t\2\1\z\x\t\v\r\d\b\u\p\8\w\0\3\t\y\p\l\w\0\l\1\4\h\r\x\i\u\3\u\e\i\6\b\5\2\c\m\z\d\u\9\l\v\r\i\q\7\p\8\c\z\4\x\c\n\w\s\i\d\a\5\6\9\m\p\t\t\4\v\o\y\z\p\1\y\q\6\o\0\1\t\7\v\b\q\6\y\b\g\h\d\w\a\d\z\m\p\z\w\e\b\f\e\f\3\h\9\l\p\i\f\f\e\5\0\m\f\h\w\y\s\1\z\s\z\8\x\3\9\n\d\d\b\v\1\g\c\m\i\r\t\2\5\l\z\n\d\i\3\v\s\y\9\s\l\y\k\k\6\f\f\1\6\6\l\f\j\7\k\j\h\9\0\s\l\z\o\m\a\3\s\u\o\v\2\u\0\j\0\p\7\9\r\s\u\d\4\f\k\l\l\m\k\r\3\1\c\j\m\0\s\5\3\g\d\h\r\j\9\h\u\5\j\9\f\4\s\0\y\h\z\r\h\h\f\7\6\l\q\r\g\f\z\j\y\w\8\r\0\p\g\i\i\1\p\y\y\l\d\l\j\4\p\9\w\s\g\c\5\6\t\3\w\1\2\y\f\o\v\9\m\i\y\j\j\r\5\7\c\g\d\9\b\r\f\g\e\a\8\0\l\0\3\b\l\r\h\q\x\2\w\8\3\i\q\7\t\1\l\8\k\1\v\n\d\c\7\3\g\d ]] 00:06:03.457 10:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:03.457 10:48:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:03.457 [2024-12-09 10:48:56.586009] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:03.457 [2024-12-09 10:48:56.586083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60575 ] 00:06:03.715 [2024-12-09 10:48:56.738954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.715 [2024-12-09 10:48:56.796305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.715 [2024-12-09 10:48:56.840153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.715  [2024-12-09T10:48:57.152Z] Copying: 512/512 [B] (average 500 kBps) 00:06:03.973 00:06:03.973 10:48:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x9a4h6xjhhughpqh0okbbfya81l5apouinh470gn0128pz8azpmut9bkwdg2oqb1lkd7gig4skgbia5k6sgzmgf7wf3on7kgv47q6eryce7hmy9krx4pjzptq57uk6jezf8mv5ju012lvzdlyatlkh1xo3fcv8l5syl2u9eseaff1okueeft21zxtvrdbup8w03typlw0l14hrxiu3uei6b52cmzdu9lvriq7p8cz4xcnwsida569mptt4voyzp1yq6o01t7vbq6ybghdwadzmpzwebfef3h9lpiffe50mfhwys1zsz8x39nddbv1gcmirt25lzndi3vsy9slykk6ff166lfj7kjh90slzoma3suov2u0j0p79rsud4fkllmkr31cjm0s53gdhrj9hu5j9f4s0yhzrhhf76lqrgfzjyw8r0pgii1pyyldlj4p9wsgc56t3w12yfov9miyjjr57cgd9brfgea80l03blrhqx2w83iq7t1l8k1vndc73gd == \x\9\a\4\h\6\x\j\h\h\u\g\h\p\q\h\0\o\k\b\b\f\y\a\8\1\l\5\a\p\o\u\i\n\h\4\7\0\g\n\0\1\2\8\p\z\8\a\z\p\m\u\t\9\b\k\w\d\g\2\o\q\b\1\l\k\d\7\g\i\g\4\s\k\g\b\i\a\5\k\6\s\g\z\m\g\f\7\w\f\3\o\n\7\k\g\v\4\7\q\6\e\r\y\c\e\7\h\m\y\9\k\r\x\4\p\j\z\p\t\q\5\7\u\k\6\j\e\z\f\8\m\v\5\j\u\0\1\2\l\v\z\d\l\y\a\t\l\k\h\1\x\o\3\f\c\v\8\l\5\s\y\l\2\u\9\e\s\e\a\f\f\1\o\k\u\e\e\f\t\2\1\z\x\t\v\r\d\b\u\p\8\w\0\3\t\y\p\l\w\0\l\1\4\h\r\x\i\u\3\u\e\i\6\b\5\2\c\m\z\d\u\9\l\v\r\i\q\7\p\8\c\z\4\x\c\n\w\s\i\d\a\5\6\9\m\p\t\t\4\v\o\y\z\p\1\y\q\6\o\0\1\t\7\v\b\q\6\y\b\g\h\d\w\a\d\z\m\p\z\w\e\b\f\e\f\3\h\9\l\p\i\f\f\e\5\0\m\f\h\w\y\s\1\z\s\z\8\x\3\9\n\d\d\b\v\1\g\c\m\i\r\t\2\5\l\z\n\d\i\3\v\s\y\9\s\l\y\k\k\6\f\f\1\6\6\l\f\j\7\k\j\h\9\0\s\l\z\o\m\a\3\s\u\o\v\2\u\0\j\0\p\7\9\r\s\u\d\4\f\k\l\l\m\k\r\3\1\c\j\m\0\s\5\3\g\d\h\r\j\9\h\u\5\j\9\f\4\s\0\y\h\z\r\h\h\f\7\6\l\q\r\g\f\z\j\y\w\8\r\0\p\g\i\i\1\p\y\y\l\d\l\j\4\p\9\w\s\g\c\5\6\t\3\w\1\2\y\f\o\v\9\m\i\y\j\j\r\5\7\c\g\d\9\b\r\f\g\e\a\8\0\l\0\3\b\l\r\h\q\x\2\w\8\3\i\q\7\t\1\l\8\k\1\v\n\d\c\7\3\g\d ]] 00:06:03.973 10:48:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:03.973 10:48:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:03.973 [2024-12-09 10:48:57.134980] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:03.973 [2024-12-09 10:48:57.135051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60584 ] 00:06:04.232 [2024-12-09 10:48:57.286904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.232 [2024-12-09 10:48:57.345080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.232 [2024-12-09 10:48:57.388572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.491  [2024-12-09T10:48:57.670Z] Copying: 512/512 [B] (average 166 kBps) 00:06:04.491 00:06:04.492 10:48:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x9a4h6xjhhughpqh0okbbfya81l5apouinh470gn0128pz8azpmut9bkwdg2oqb1lkd7gig4skgbia5k6sgzmgf7wf3on7kgv47q6eryce7hmy9krx4pjzptq57uk6jezf8mv5ju012lvzdlyatlkh1xo3fcv8l5syl2u9eseaff1okueeft21zxtvrdbup8w03typlw0l14hrxiu3uei6b52cmzdu9lvriq7p8cz4xcnwsida569mptt4voyzp1yq6o01t7vbq6ybghdwadzmpzwebfef3h9lpiffe50mfhwys1zsz8x39nddbv1gcmirt25lzndi3vsy9slykk6ff166lfj7kjh90slzoma3suov2u0j0p79rsud4fkllmkr31cjm0s53gdhrj9hu5j9f4s0yhzrhhf76lqrgfzjyw8r0pgii1pyyldlj4p9wsgc56t3w12yfov9miyjjr57cgd9brfgea80l03blrhqx2w83iq7t1l8k1vndc73gd == \x\9\a\4\h\6\x\j\h\h\u\g\h\p\q\h\0\o\k\b\b\f\y\a\8\1\l\5\a\p\o\u\i\n\h\4\7\0\g\n\0\1\2\8\p\z\8\a\z\p\m\u\t\9\b\k\w\d\g\2\o\q\b\1\l\k\d\7\g\i\g\4\s\k\g\b\i\a\5\k\6\s\g\z\m\g\f\7\w\f\3\o\n\7\k\g\v\4\7\q\6\e\r\y\c\e\7\h\m\y\9\k\r\x\4\p\j\z\p\t\q\5\7\u\k\6\j\e\z\f\8\m\v\5\j\u\0\1\2\l\v\z\d\l\y\a\t\l\k\h\1\x\o\3\f\c\v\8\l\5\s\y\l\2\u\9\e\s\e\a\f\f\1\o\k\u\e\e\f\t\2\1\z\x\t\v\r\d\b\u\p\8\w\0\3\t\y\p\l\w\0\l\1\4\h\r\x\i\u\3\u\e\i\6\b\5\2\c\m\z\d\u\9\l\v\r\i\q\7\p\8\c\z\4\x\c\n\w\s\i\d\a\5\6\9\m\p\t\t\4\v\o\y\z\p\1\y\q\6\o\0\1\t\7\v\b\q\6\y\b\g\h\d\w\a\d\z\m\p\z\w\e\b\f\e\f\3\h\9\l\p\i\f\f\e\5\0\m\f\h\w\y\s\1\z\s\z\8\x\3\9\n\d\d\b\v\1\g\c\m\i\r\t\2\5\l\z\n\d\i\3\v\s\y\9\s\l\y\k\k\6\f\f\1\6\6\l\f\j\7\k\j\h\9\0\s\l\z\o\m\a\3\s\u\o\v\2\u\0\j\0\p\7\9\r\s\u\d\4\f\k\l\l\m\k\r\3\1\c\j\m\0\s\5\3\g\d\h\r\j\9\h\u\5\j\9\f\4\s\0\y\h\z\r\h\h\f\7\6\l\q\r\g\f\z\j\y\w\8\r\0\p\g\i\i\1\p\y\y\l\d\l\j\4\p\9\w\s\g\c\5\6\t\3\w\1\2\y\f\o\v\9\m\i\y\j\j\r\5\7\c\g\d\9\b\r\f\g\e\a\8\0\l\0\3\b\l\r\h\q\x\2\w\8\3\i\q\7\t\1\l\8\k\1\v\n\d\c\7\3\g\d ]] 00:06:04.492 10:48:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:04.492 10:48:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:04.751 [2024-12-09 10:48:57.685094] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:04.751 [2024-12-09 10:48:57.685250] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60594 ] 00:06:04.751 [2024-12-09 10:48:57.833918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.751 [2024-12-09 10:48:57.890866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.011 [2024-12-09 10:48:57.933028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.011  [2024-12-09T10:48:58.190Z] Copying: 512/512 [B] (average 125 kBps) 00:06:05.011 00:06:05.011 10:48:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ x9a4h6xjhhughpqh0okbbfya81l5apouinh470gn0128pz8azpmut9bkwdg2oqb1lkd7gig4skgbia5k6sgzmgf7wf3on7kgv47q6eryce7hmy9krx4pjzptq57uk6jezf8mv5ju012lvzdlyatlkh1xo3fcv8l5syl2u9eseaff1okueeft21zxtvrdbup8w03typlw0l14hrxiu3uei6b52cmzdu9lvriq7p8cz4xcnwsida569mptt4voyzp1yq6o01t7vbq6ybghdwadzmpzwebfef3h9lpiffe50mfhwys1zsz8x39nddbv1gcmirt25lzndi3vsy9slykk6ff166lfj7kjh90slzoma3suov2u0j0p79rsud4fkllmkr31cjm0s53gdhrj9hu5j9f4s0yhzrhhf76lqrgfzjyw8r0pgii1pyyldlj4p9wsgc56t3w12yfov9miyjjr57cgd9brfgea80l03blrhqx2w83iq7t1l8k1vndc73gd == \x\9\a\4\h\6\x\j\h\h\u\g\h\p\q\h\0\o\k\b\b\f\y\a\8\1\l\5\a\p\o\u\i\n\h\4\7\0\g\n\0\1\2\8\p\z\8\a\z\p\m\u\t\9\b\k\w\d\g\2\o\q\b\1\l\k\d\7\g\i\g\4\s\k\g\b\i\a\5\k\6\s\g\z\m\g\f\7\w\f\3\o\n\7\k\g\v\4\7\q\6\e\r\y\c\e\7\h\m\y\9\k\r\x\4\p\j\z\p\t\q\5\7\u\k\6\j\e\z\f\8\m\v\5\j\u\0\1\2\l\v\z\d\l\y\a\t\l\k\h\1\x\o\3\f\c\v\8\l\5\s\y\l\2\u\9\e\s\e\a\f\f\1\o\k\u\e\e\f\t\2\1\z\x\t\v\r\d\b\u\p\8\w\0\3\t\y\p\l\w\0\l\1\4\h\r\x\i\u\3\u\e\i\6\b\5\2\c\m\z\d\u\9\l\v\r\i\q\7\p\8\c\z\4\x\c\n\w\s\i\d\a\5\6\9\m\p\t\t\4\v\o\y\z\p\1\y\q\6\o\0\1\t\7\v\b\q\6\y\b\g\h\d\w\a\d\z\m\p\z\w\e\b\f\e\f\3\h\9\l\p\i\f\f\e\5\0\m\f\h\w\y\s\1\z\s\z\8\x\3\9\n\d\d\b\v\1\g\c\m\i\r\t\2\5\l\z\n\d\i\3\v\s\y\9\s\l\y\k\k\6\f\f\1\6\6\l\f\j\7\k\j\h\9\0\s\l\z\o\m\a\3\s\u\o\v\2\u\0\j\0\p\7\9\r\s\u\d\4\f\k\l\l\m\k\r\3\1\c\j\m\0\s\5\3\g\d\h\r\j\9\h\u\5\j\9\f\4\s\0\y\h\z\r\h\h\f\7\6\l\q\r\g\f\z\j\y\w\8\r\0\p\g\i\i\1\p\y\y\l\d\l\j\4\p\9\w\s\g\c\5\6\t\3\w\1\2\y\f\o\v\9\m\i\y\j\j\r\5\7\c\g\d\9\b\r\f\g\e\a\8\0\l\0\3\b\l\r\h\q\x\2\w\8\3\i\q\7\t\1\l\8\k\1\v\n\d\c\7\3\g\d ]] 00:06:05.011 00:06:05.011 real 0m4.371s 00:06:05.011 user 0m2.587s 00:06:05.011 sys 0m1.825s 00:06:05.011 ************************************ 00:06:05.011 END TEST dd_flags_misc 00:06:05.011 ************************************ 00:06:05.011 10:48:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.011 10:48:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:05.270 * Second test run, disabling liburing, forcing AIO 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:05.270 ************************************ 00:06:05.270 START TEST dd_flag_append_forced_aio 00:06:05.270 ************************************ 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=jv2ivv5azgjak6wiafcrf0ihlefrqp8g 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:05.270 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=u8vimolp0bvr64lsilj1170yogoypk4r 00:06:05.271 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s jv2ivv5azgjak6wiafcrf0ihlefrqp8g 00:06:05.271 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s u8vimolp0bvr64lsilj1170yogoypk4r 00:06:05.271 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:05.271 [2024-12-09 10:48:58.298523] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
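This is the first test of the second run, which forces the POSIX AIO path: just before START TEST the script appended --aio to the common spdk_dd arguments (the DD_APP+=("--aio") entry), and the banner notes that liburing is disabled. The append check generates two 32-character strings, writes them to dump0 and dump1, copies dump0 onto dump1 with --oflag=append, and then expects dump1 to hold its original string immediately followed by dump0's — the concatenated [[ ... ]] comparison further down is that assertion. A minimal equivalent with coreutils dd (payload strings and paths are illustrative; note that GNU dd needs conv=notrunc alongside oflag=append):

    # Sketch: verify O_APPEND by copying one payload onto the end of another.
    d0='first-payload'; d1='second-payload'
    printf %s "$d0" > /tmp/dd.dump0
    printf %s "$d1" > /tmp/dd.dump1

    dd if=/tmp/dd.dump0 of=/tmp/dd.dump1 oflag=append conv=notrunc status=none
    [[ $(cat /tmp/dd.dump1) == "${d1}${d0}" ]] && echo "append verified: dump1 is dump1 + dump0"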
00:06:05.271 [2024-12-09 10:48:58.298673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60622 ] 00:06:05.529 [2024-12-09 10:48:58.484902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.529 [2024-12-09 10:48:58.539238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.529 [2024-12-09 10:48:58.581483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.529  [2024-12-09T10:48:58.967Z] Copying: 32/32 [B] (average 31 kBps) 00:06:05.788 00:06:05.788 ************************************ 00:06:05.788 END TEST dd_flag_append_forced_aio 00:06:05.788 ************************************ 00:06:05.788 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ u8vimolp0bvr64lsilj1170yogoypk4rjv2ivv5azgjak6wiafcrf0ihlefrqp8g == \u\8\v\i\m\o\l\p\0\b\v\r\6\4\l\s\i\l\j\1\1\7\0\y\o\g\o\y\p\k\4\r\j\v\2\i\v\v\5\a\z\g\j\a\k\6\w\i\a\f\c\r\f\0\i\h\l\e\f\r\q\p\8\g ]] 00:06:05.788 00:06:05.788 real 0m0.598s 00:06:05.788 user 0m0.358s 00:06:05.788 sys 0m0.121s 00:06:05.788 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.788 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:05.788 10:48:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:05.788 10:48:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.788 10:48:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.788 10:48:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:05.788 ************************************ 00:06:05.788 START TEST dd_flag_directory_forced_aio 00:06:05.788 ************************************ 00:06:05.788 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:06:05.788 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:05.788 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:05.789 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:05.789 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.789 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.789 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.789 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.789 10:48:58 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.789 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.789 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:05.789 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:05.789 10:48:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:05.789 [2024-12-09 10:48:58.946623] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:05.789 [2024-12-09 10:48:58.946712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60654 ] 00:06:06.050 [2024-12-09 10:48:59.107881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.050 [2024-12-09 10:48:59.164467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.050 [2024-12-09 10:48:59.207185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.333 [2024-12-09 10:48:59.239446] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:06.333 [2024-12-09 10:48:59.239499] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:06.333 [2024-12-09 10:48:59.239508] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.333 [2024-12-09 10:48:59.336655] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:06.333 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:06.592 [2024-12-09 10:48:59.503388] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:06.592 [2024-12-09 10:48:59.503528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60664 ] 00:06:06.592 [2024-12-09 10:48:59.656507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.592 [2024-12-09 10:48:59.710501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.592 [2024-12-09 10:48:59.754262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.851 [2024-12-09 10:48:59.785864] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:06.851 [2024-12-09 10:48:59.785912] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:06.851 [2024-12-09 10:48:59.785922] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:06.851 [2024-12-09 10:48:59.883800] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:06.851 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:06.851 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.851 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:06.851 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:06.851 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:06.851 10:48:59 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.851 00:06:06.851 real 0m1.102s 00:06:06.851 user 0m0.647s 00:06:06.851 sys 0m0.245s 00:06:06.851 ************************************ 00:06:06.851 END TEST dd_flag_directory_forced_aio 00:06:06.851 ************************************ 00:06:06.851 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.851 10:48:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:07.110 ************************************ 00:06:07.110 START TEST dd_flag_nofollow_forced_aio 00:06:07.110 ************************************ 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:07.110 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:07.110 [2024-12-09 10:49:00.124065] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:07.110 [2024-12-09 10:49:00.124140] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60692 ] 00:06:07.110 [2024-12-09 10:49:00.259728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.369 [2024-12-09 10:49:00.316214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.369 [2024-12-09 10:49:00.358663] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.369 [2024-12-09 10:49:00.394390] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:07.369 [2024-12-09 10:49:00.394441] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:07.369 [2024-12-09 10:49:00.394454] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.369 [2024-12-09 10:49:00.493076] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:07.627 10:49:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:07.627 [2024-12-09 10:49:00.660376] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:07.627 [2024-12-09 10:49:00.660454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60702 ] 00:06:07.886 [2024-12-09 10:49:00.808762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.886 [2024-12-09 10:49:00.866267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.886 [2024-12-09 10:49:00.909474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.886 [2024-12-09 10:49:00.941339] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:07.886 [2024-12-09 10:49:00.941436] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:07.886 [2024-12-09 10:49:00.941450] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:07.886 [2024-12-09 10:49:01.039178] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:08.146 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:08.146 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.146 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:08.146 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:08.146 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:08.146 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.146 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:08.146 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:08.146 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:08.146 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.146 [2024-12-09 10:49:01.205641] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:08.146 [2024-12-09 10:49:01.206111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60709 ] 00:06:08.404 [2024-12-09 10:49:01.359963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.404 [2024-12-09 10:49:01.417831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.404 [2024-12-09 10:49:01.460931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.404  [2024-12-09T10:49:01.840Z] Copying: 512/512 [B] (average 500 kBps) 00:06:08.661 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ kn926hdhl3hvsfkkerxjgfgbzq4z48c3t6gs7kgi4fak853s9r1uwe2h3p61cw13v4ttlqn6037mydzmxr9j1s659u3eswr1ltsoks6h3wm2dqn4m48b2qadwmgvvxlvvq1g07jg8655i4rhvyll61c7bwyjmb6zecmx4lacrvavpvywma05kjfn4rgshfyqoji7u4y9jzd2f9lustel5nyas6yzjbbzvn9y4g9aiah5shcy80j1h3npidk9yqhp06jfvx7utk3ie6kk8g033ye9up3coq31sbzduix1p56b2c09qsmt0m4cmmc9aacwf7cqp5sfh7njs2v6b0361qllvyis56hh5l41io9s10e1iccube37x6yfl7md0b6njk1in1b21fh3htozf6fog4dtuwhj7bzbj911vbhc8ij71zmh9kal95p4egl7lt2zabyicgvyabzhqxpbc97vnxnso60j5329hkw0ll6weh04og1s4hybfbulp4dhn58p == \k\n\9\2\6\h\d\h\l\3\h\v\s\f\k\k\e\r\x\j\g\f\g\b\z\q\4\z\4\8\c\3\t\6\g\s\7\k\g\i\4\f\a\k\8\5\3\s\9\r\1\u\w\e\2\h\3\p\6\1\c\w\1\3\v\4\t\t\l\q\n\6\0\3\7\m\y\d\z\m\x\r\9\j\1\s\6\5\9\u\3\e\s\w\r\1\l\t\s\o\k\s\6\h\3\w\m\2\d\q\n\4\m\4\8\b\2\q\a\d\w\m\g\v\v\x\l\v\v\q\1\g\0\7\j\g\8\6\5\5\i\4\r\h\v\y\l\l\6\1\c\7\b\w\y\j\m\b\6\z\e\c\m\x\4\l\a\c\r\v\a\v\p\v\y\w\m\a\0\5\k\j\f\n\4\r\g\s\h\f\y\q\o\j\i\7\u\4\y\9\j\z\d\2\f\9\l\u\s\t\e\l\5\n\y\a\s\6\y\z\j\b\b\z\v\n\9\y\4\g\9\a\i\a\h\5\s\h\c\y\8\0\j\1\h\3\n\p\i\d\k\9\y\q\h\p\0\6\j\f\v\x\7\u\t\k\3\i\e\6\k\k\8\g\0\3\3\y\e\9\u\p\3\c\o\q\3\1\s\b\z\d\u\i\x\1\p\5\6\b\2\c\0\9\q\s\m\t\0\m\4\c\m\m\c\9\a\a\c\w\f\7\c\q\p\5\s\f\h\7\n\j\s\2\v\6\b\0\3\6\1\q\l\l\v\y\i\s\5\6\h\h\5\l\4\1\i\o\9\s\1\0\e\1\i\c\c\u\b\e\3\7\x\6\y\f\l\7\m\d\0\b\6\n\j\k\1\i\n\1\b\2\1\f\h\3\h\t\o\z\f\6\f\o\g\4\d\t\u\w\h\j\7\b\z\b\j\9\1\1\v\b\h\c\8\i\j\7\1\z\m\h\9\k\a\l\9\5\p\4\e\g\l\7\l\t\2\z\a\b\y\i\c\g\v\y\a\b\z\h\q\x\p\b\c\9\7\v\n\x\n\s\o\6\0\j\5\3\2\9\h\k\w\0\l\l\6\w\e\h\0\4\o\g\1\s\4\h\y\b\f\b\u\l\p\4\d\h\n\5\8\p ]] 00:06:08.662 00:06:08.662 real 0m1.656s 00:06:08.662 user 0m0.962s 00:06:08.662 sys 0m0.365s 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:08.662 ************************************ 00:06:08.662 END TEST dd_flag_nofollow_forced_aio 00:06:08.662 ************************************ 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:08.662 ************************************ 00:06:08.662 START TEST dd_flag_noatime_forced_aio 00:06:08.662 ************************************ 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733741341 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733741341 00:06:08.662 10:49:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:10.040 10:49:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.040 [2024-12-09 10:49:02.859060] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
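dd_flag_noatime_forced_aio repeats the earlier noatime scenario, but every spdk_dd invocation in this run carries --aio, so the same flag handling is exercised through the POSIX AIO back end rather than io_uring. The mechanism is just an argument array the test keeps extending; a simplified sketch of that pattern, assuming the array also holds the binary path (the real dd/posix.sh plumbing is more involved):

    # Sketch: collect common spdk_dd arguments in an array and reuse them per invocation.
    DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
    DD_APP+=(--aio)                      # second run: force the POSIX AIO code path

    "${DD_APP[@]}" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
                   --iflag=noatime \
                   --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1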
00:06:10.040 [2024-12-09 10:49:02.859137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60750 ] 00:06:10.040 [2024-12-09 10:49:03.011052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.040 [2024-12-09 10:49:03.070005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.040 [2024-12-09 10:49:03.113465] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.040  [2024-12-09T10:49:03.478Z] Copying: 512/512 [B] (average 500 kBps) 00:06:10.299 00:06:10.299 10:49:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:10.299 10:49:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733741341 )) 00:06:10.299 10:49:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.299 10:49:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733741341 )) 00:06:10.299 10:49:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.299 [2024-12-09 10:49:03.432707] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:10.299 [2024-12-09 10:49:03.432933] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60761 ] 00:06:10.558 [2024-12-09 10:49:03.588864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.558 [2024-12-09 10:49:03.646641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.558 [2024-12-09 10:49:03.689660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.558  [2024-12-09T10:49:03.995Z] Copying: 512/512 [B] (average 500 kBps) 00:06:10.816 00:06:10.816 10:49:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:10.816 10:49:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733741343 )) 00:06:10.816 00:06:10.816 real 0m2.172s 00:06:10.816 user 0m0.662s 00:06:10.816 sys 0m0.271s 00:06:10.816 ************************************ 00:06:10.816 END TEST dd_flag_noatime_forced_aio 00:06:10.816 ************************************ 00:06:10.816 10:49:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.816 10:49:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:10.816 10:49:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:10.816 10:49:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.075 10:49:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.075 10:49:03 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:11.075 ************************************ 00:06:11.075 START TEST dd_flags_misc_forced_aio 00:06:11.075 ************************************ 00:06:11.075 10:49:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:06:11.075 10:49:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:11.075 10:49:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:11.075 10:49:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:11.075 10:49:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:11.075 10:49:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:11.075 10:49:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:11.075 10:49:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:11.075 10:49:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:11.075 10:49:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:11.075 [2024-12-09 10:49:04.072911] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:11.075 [2024-12-09 10:49:04.072981] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60788 ] 00:06:11.075 [2024-12-09 10:49:04.226931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.333 [2024-12-09 10:49:04.281584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.333 [2024-12-09 10:49:04.324454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.333  [2024-12-09T10:49:04.771Z] Copying: 512/512 [B] (average 500 kBps) 00:06:11.592 00:06:11.592 10:49:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ s1iw3zn76j71fa26av1wyjtf2hgq0tuwcecrem5av9iuf0mj98god51b0f9sr3xozetsadx854srkdyckq6ns59f7pvir1ivhwqba584kta8zqls0140esktllmd9brejqqb1u4h2kr13b615spdgce1b7ugijqac35n7461xc7hkr46x5pv4loktj22sydy5llhchx7c54gj4s2igfn2ilnoouc9t0oo0lwtcjv6w13s5uen4d0unnmrrbayljdi4wafv6amh51s3iajv31o3izn9hqhgs6ksrjn8fhr7toa6rnfd2emtwpm94azyjsjozi4zpgv6cy8eo7sfjfcgil2mk2jbui4iraavul1sn3zyoh1qeh9o9qazskxa8rsvn74cnnjrw41rywx4z64d8f7r1frmgfg317exqkz8epnaglvx8zk9oyalks0exdgtst3d07031unxw4kmdqv2wu0tn2ukmig2k7mkbdd3i39ii8kwg2qjuvuj8f6qct == 
\s\1\i\w\3\z\n\7\6\j\7\1\f\a\2\6\a\v\1\w\y\j\t\f\2\h\g\q\0\t\u\w\c\e\c\r\e\m\5\a\v\9\i\u\f\0\m\j\9\8\g\o\d\5\1\b\0\f\9\s\r\3\x\o\z\e\t\s\a\d\x\8\5\4\s\r\k\d\y\c\k\q\6\n\s\5\9\f\7\p\v\i\r\1\i\v\h\w\q\b\a\5\8\4\k\t\a\8\z\q\l\s\0\1\4\0\e\s\k\t\l\l\m\d\9\b\r\e\j\q\q\b\1\u\4\h\2\k\r\1\3\b\6\1\5\s\p\d\g\c\e\1\b\7\u\g\i\j\q\a\c\3\5\n\7\4\6\1\x\c\7\h\k\r\4\6\x\5\p\v\4\l\o\k\t\j\2\2\s\y\d\y\5\l\l\h\c\h\x\7\c\5\4\g\j\4\s\2\i\g\f\n\2\i\l\n\o\o\u\c\9\t\0\o\o\0\l\w\t\c\j\v\6\w\1\3\s\5\u\e\n\4\d\0\u\n\n\m\r\r\b\a\y\l\j\d\i\4\w\a\f\v\6\a\m\h\5\1\s\3\i\a\j\v\3\1\o\3\i\z\n\9\h\q\h\g\s\6\k\s\r\j\n\8\f\h\r\7\t\o\a\6\r\n\f\d\2\e\m\t\w\p\m\9\4\a\z\y\j\s\j\o\z\i\4\z\p\g\v\6\c\y\8\e\o\7\s\f\j\f\c\g\i\l\2\m\k\2\j\b\u\i\4\i\r\a\a\v\u\l\1\s\n\3\z\y\o\h\1\q\e\h\9\o\9\q\a\z\s\k\x\a\8\r\s\v\n\7\4\c\n\n\j\r\w\4\1\r\y\w\x\4\z\6\4\d\8\f\7\r\1\f\r\m\g\f\g\3\1\7\e\x\q\k\z\8\e\p\n\a\g\l\v\x\8\z\k\9\o\y\a\l\k\s\0\e\x\d\g\t\s\t\3\d\0\7\0\3\1\u\n\x\w\4\k\m\d\q\v\2\w\u\0\t\n\2\u\k\m\i\g\2\k\7\m\k\b\d\d\3\i\3\9\i\i\8\k\w\g\2\q\j\u\v\u\j\8\f\6\q\c\t ]] 00:06:11.592 10:49:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:11.592 10:49:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:11.592 [2024-12-09 10:49:04.631973] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:11.592 [2024-12-09 10:49:04.632053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60801 ] 00:06:11.851 [2024-12-09 10:49:04.784332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.851 [2024-12-09 10:49:04.840662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.851 [2024-12-09 10:49:04.883456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.851  [2024-12-09T10:49:05.286Z] Copying: 512/512 [B] (average 500 kBps) 00:06:12.107 00:06:12.107 10:49:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ s1iw3zn76j71fa26av1wyjtf2hgq0tuwcecrem5av9iuf0mj98god51b0f9sr3xozetsadx854srkdyckq6ns59f7pvir1ivhwqba584kta8zqls0140esktllmd9brejqqb1u4h2kr13b615spdgce1b7ugijqac35n7461xc7hkr46x5pv4loktj22sydy5llhchx7c54gj4s2igfn2ilnoouc9t0oo0lwtcjv6w13s5uen4d0unnmrrbayljdi4wafv6amh51s3iajv31o3izn9hqhgs6ksrjn8fhr7toa6rnfd2emtwpm94azyjsjozi4zpgv6cy8eo7sfjfcgil2mk2jbui4iraavul1sn3zyoh1qeh9o9qazskxa8rsvn74cnnjrw41rywx4z64d8f7r1frmgfg317exqkz8epnaglvx8zk9oyalks0exdgtst3d07031unxw4kmdqv2wu0tn2ukmig2k7mkbdd3i39ii8kwg2qjuvuj8f6qct == 
\s\1\i\w\3\z\n\7\6\j\7\1\f\a\2\6\a\v\1\w\y\j\t\f\2\h\g\q\0\t\u\w\c\e\c\r\e\m\5\a\v\9\i\u\f\0\m\j\9\8\g\o\d\5\1\b\0\f\9\s\r\3\x\o\z\e\t\s\a\d\x\8\5\4\s\r\k\d\y\c\k\q\6\n\s\5\9\f\7\p\v\i\r\1\i\v\h\w\q\b\a\5\8\4\k\t\a\8\z\q\l\s\0\1\4\0\e\s\k\t\l\l\m\d\9\b\r\e\j\q\q\b\1\u\4\h\2\k\r\1\3\b\6\1\5\s\p\d\g\c\e\1\b\7\u\g\i\j\q\a\c\3\5\n\7\4\6\1\x\c\7\h\k\r\4\6\x\5\p\v\4\l\o\k\t\j\2\2\s\y\d\y\5\l\l\h\c\h\x\7\c\5\4\g\j\4\s\2\i\g\f\n\2\i\l\n\o\o\u\c\9\t\0\o\o\0\l\w\t\c\j\v\6\w\1\3\s\5\u\e\n\4\d\0\u\n\n\m\r\r\b\a\y\l\j\d\i\4\w\a\f\v\6\a\m\h\5\1\s\3\i\a\j\v\3\1\o\3\i\z\n\9\h\q\h\g\s\6\k\s\r\j\n\8\f\h\r\7\t\o\a\6\r\n\f\d\2\e\m\t\w\p\m\9\4\a\z\y\j\s\j\o\z\i\4\z\p\g\v\6\c\y\8\e\o\7\s\f\j\f\c\g\i\l\2\m\k\2\j\b\u\i\4\i\r\a\a\v\u\l\1\s\n\3\z\y\o\h\1\q\e\h\9\o\9\q\a\z\s\k\x\a\8\r\s\v\n\7\4\c\n\n\j\r\w\4\1\r\y\w\x\4\z\6\4\d\8\f\7\r\1\f\r\m\g\f\g\3\1\7\e\x\q\k\z\8\e\p\n\a\g\l\v\x\8\z\k\9\o\y\a\l\k\s\0\e\x\d\g\t\s\t\3\d\0\7\0\3\1\u\n\x\w\4\k\m\d\q\v\2\w\u\0\t\n\2\u\k\m\i\g\2\k\7\m\k\b\d\d\3\i\3\9\i\i\8\k\w\g\2\q\j\u\v\u\j\8\f\6\q\c\t ]] 00:06:12.107 10:49:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:12.107 10:49:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:12.107 [2024-12-09 10:49:05.192365] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:12.107 [2024-12-09 10:49:05.192445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60803 ] 00:06:12.365 [2024-12-09 10:49:05.351189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.365 [2024-12-09 10:49:05.406831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.365 [2024-12-09 10:49:05.449765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.365  [2024-12-09T10:49:05.801Z] Copying: 512/512 [B] (average 125 kBps) 00:06:12.622 00:06:12.622 10:49:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ s1iw3zn76j71fa26av1wyjtf2hgq0tuwcecrem5av9iuf0mj98god51b0f9sr3xozetsadx854srkdyckq6ns59f7pvir1ivhwqba584kta8zqls0140esktllmd9brejqqb1u4h2kr13b615spdgce1b7ugijqac35n7461xc7hkr46x5pv4loktj22sydy5llhchx7c54gj4s2igfn2ilnoouc9t0oo0lwtcjv6w13s5uen4d0unnmrrbayljdi4wafv6amh51s3iajv31o3izn9hqhgs6ksrjn8fhr7toa6rnfd2emtwpm94azyjsjozi4zpgv6cy8eo7sfjfcgil2mk2jbui4iraavul1sn3zyoh1qeh9o9qazskxa8rsvn74cnnjrw41rywx4z64d8f7r1frmgfg317exqkz8epnaglvx8zk9oyalks0exdgtst3d07031unxw4kmdqv2wu0tn2ukmig2k7mkbdd3i39ii8kwg2qjuvuj8f6qct == 
\s\1\i\w\3\z\n\7\6\j\7\1\f\a\2\6\a\v\1\w\y\j\t\f\2\h\g\q\0\t\u\w\c\e\c\r\e\m\5\a\v\9\i\u\f\0\m\j\9\8\g\o\d\5\1\b\0\f\9\s\r\3\x\o\z\e\t\s\a\d\x\8\5\4\s\r\k\d\y\c\k\q\6\n\s\5\9\f\7\p\v\i\r\1\i\v\h\w\q\b\a\5\8\4\k\t\a\8\z\q\l\s\0\1\4\0\e\s\k\t\l\l\m\d\9\b\r\e\j\q\q\b\1\u\4\h\2\k\r\1\3\b\6\1\5\s\p\d\g\c\e\1\b\7\u\g\i\j\q\a\c\3\5\n\7\4\6\1\x\c\7\h\k\r\4\6\x\5\p\v\4\l\o\k\t\j\2\2\s\y\d\y\5\l\l\h\c\h\x\7\c\5\4\g\j\4\s\2\i\g\f\n\2\i\l\n\o\o\u\c\9\t\0\o\o\0\l\w\t\c\j\v\6\w\1\3\s\5\u\e\n\4\d\0\u\n\n\m\r\r\b\a\y\l\j\d\i\4\w\a\f\v\6\a\m\h\5\1\s\3\i\a\j\v\3\1\o\3\i\z\n\9\h\q\h\g\s\6\k\s\r\j\n\8\f\h\r\7\t\o\a\6\r\n\f\d\2\e\m\t\w\p\m\9\4\a\z\y\j\s\j\o\z\i\4\z\p\g\v\6\c\y\8\e\o\7\s\f\j\f\c\g\i\l\2\m\k\2\j\b\u\i\4\i\r\a\a\v\u\l\1\s\n\3\z\y\o\h\1\q\e\h\9\o\9\q\a\z\s\k\x\a\8\r\s\v\n\7\4\c\n\n\j\r\w\4\1\r\y\w\x\4\z\6\4\d\8\f\7\r\1\f\r\m\g\f\g\3\1\7\e\x\q\k\z\8\e\p\n\a\g\l\v\x\8\z\k\9\o\y\a\l\k\s\0\e\x\d\g\t\s\t\3\d\0\7\0\3\1\u\n\x\w\4\k\m\d\q\v\2\w\u\0\t\n\2\u\k\m\i\g\2\k\7\m\k\b\d\d\3\i\3\9\i\i\8\k\w\g\2\q\j\u\v\u\j\8\f\6\q\c\t ]] 00:06:12.622 10:49:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:12.623 10:49:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:12.623 [2024-12-09 10:49:05.754662] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:12.623 [2024-12-09 10:49:05.754763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60816 ] 00:06:12.897 [2024-12-09 10:49:05.915327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.897 [2024-12-09 10:49:05.972868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.897 [2024-12-09 10:49:06.015427] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.897  [2024-12-09T10:49:06.351Z] Copying: 512/512 [B] (average 250 kBps) 00:06:13.172 00:06:13.172 10:49:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ s1iw3zn76j71fa26av1wyjtf2hgq0tuwcecrem5av9iuf0mj98god51b0f9sr3xozetsadx854srkdyckq6ns59f7pvir1ivhwqba584kta8zqls0140esktllmd9brejqqb1u4h2kr13b615spdgce1b7ugijqac35n7461xc7hkr46x5pv4loktj22sydy5llhchx7c54gj4s2igfn2ilnoouc9t0oo0lwtcjv6w13s5uen4d0unnmrrbayljdi4wafv6amh51s3iajv31o3izn9hqhgs6ksrjn8fhr7toa6rnfd2emtwpm94azyjsjozi4zpgv6cy8eo7sfjfcgil2mk2jbui4iraavul1sn3zyoh1qeh9o9qazskxa8rsvn74cnnjrw41rywx4z64d8f7r1frmgfg317exqkz8epnaglvx8zk9oyalks0exdgtst3d07031unxw4kmdqv2wu0tn2ukmig2k7mkbdd3i39ii8kwg2qjuvuj8f6qct == 
\s\1\i\w\3\z\n\7\6\j\7\1\f\a\2\6\a\v\1\w\y\j\t\f\2\h\g\q\0\t\u\w\c\e\c\r\e\m\5\a\v\9\i\u\f\0\m\j\9\8\g\o\d\5\1\b\0\f\9\s\r\3\x\o\z\e\t\s\a\d\x\8\5\4\s\r\k\d\y\c\k\q\6\n\s\5\9\f\7\p\v\i\r\1\i\v\h\w\q\b\a\5\8\4\k\t\a\8\z\q\l\s\0\1\4\0\e\s\k\t\l\l\m\d\9\b\r\e\j\q\q\b\1\u\4\h\2\k\r\1\3\b\6\1\5\s\p\d\g\c\e\1\b\7\u\g\i\j\q\a\c\3\5\n\7\4\6\1\x\c\7\h\k\r\4\6\x\5\p\v\4\l\o\k\t\j\2\2\s\y\d\y\5\l\l\h\c\h\x\7\c\5\4\g\j\4\s\2\i\g\f\n\2\i\l\n\o\o\u\c\9\t\0\o\o\0\l\w\t\c\j\v\6\w\1\3\s\5\u\e\n\4\d\0\u\n\n\m\r\r\b\a\y\l\j\d\i\4\w\a\f\v\6\a\m\h\5\1\s\3\i\a\j\v\3\1\o\3\i\z\n\9\h\q\h\g\s\6\k\s\r\j\n\8\f\h\r\7\t\o\a\6\r\n\f\d\2\e\m\t\w\p\m\9\4\a\z\y\j\s\j\o\z\i\4\z\p\g\v\6\c\y\8\e\o\7\s\f\j\f\c\g\i\l\2\m\k\2\j\b\u\i\4\i\r\a\a\v\u\l\1\s\n\3\z\y\o\h\1\q\e\h\9\o\9\q\a\z\s\k\x\a\8\r\s\v\n\7\4\c\n\n\j\r\w\4\1\r\y\w\x\4\z\6\4\d\8\f\7\r\1\f\r\m\g\f\g\3\1\7\e\x\q\k\z\8\e\p\n\a\g\l\v\x\8\z\k\9\o\y\a\l\k\s\0\e\x\d\g\t\s\t\3\d\0\7\0\3\1\u\n\x\w\4\k\m\d\q\v\2\w\u\0\t\n\2\u\k\m\i\g\2\k\7\m\k\b\d\d\3\i\3\9\i\i\8\k\w\g\2\q\j\u\v\u\j\8\f\6\q\c\t ]] 00:06:13.172 10:49:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:13.172 10:49:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:13.172 10:49:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:13.172 10:49:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:13.172 10:49:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:13.172 10:49:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:13.172 [2024-12-09 10:49:06.344554] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
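The passes above and below all follow one pattern from dd/posix.sh: generate 512 random bytes, copy them with spdk_dd under every combination of input flag (direct, nonblock) and output flag (direct, nonblock, sync, dsync), and require the output file to match the input byte for byte. A minimal sketch of that loop follows; the flag arrays and the spdk_dd options are taken from the trace above, while gen_bytes writing straight into the dump file, the shortened paths, and the final comparison form are assumptions about the parts of the script this log does not show.

flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
  gen_bytes 512 > dd.dump0                   # assumed: fresh random input per input flag
  for flag_rw in "${flags_rw[@]}"; do
    spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" \
            --of=dd.dump1 --oflag="$flag_rw"
    [[ $(< dd.dump1) == "$(< dd.dump0) " ]] && : # assumed check: output must equal input exactly
  done
done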
00:06:13.172 [2024-12-09 10:49:06.344651] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60823 ] 00:06:13.431 [2024-12-09 10:49:06.504157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.431 [2024-12-09 10:49:06.562585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.690 [2024-12-09 10:49:06.613307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.690  [2024-12-09T10:49:07.127Z] Copying: 512/512 [B] (average 500 kBps) 00:06:13.948 00:06:13.948 10:49:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 2rrr8p6neblnr1heh98kaqop96qm84y2banwhs1qw1poi2brw0zqhx95zl0tyckfkfe8rvegq926amj8g1py3714bi7t380o6g0de19msfmham2p3pbrt1bvtisbdvtsc7oxrgjjttjiruoxelhp5o4sopftw2x2z8x5sjr1k61k1a69qatnx3gxs3j4z9uvzsw2y7izer7fatda6za94dzt9s66hga8d5ap254dlx46s44ardieipzfhhm045iqkktpqwyln3y3zoutcyb4ans3kr6o5a022zff30i8xab6g66vjm7ixs2koavs3pmcwhh40tm9hpspew4f3hdcxn6c5jlvmvubxxf3n6mpx0pfnw46n0aqv8qwc1x6eu93ga2z20tff4vwntgcbvnoezagsgj68c1i22fetetfjj7jy45yratuqhdrpvkgxvzurhgspewhc4nxmlm04ardv37d0oqvd4whewfjbipwx5mqmog4ck7w1fh3hwm7gnux == \2\r\r\r\8\p\6\n\e\b\l\n\r\1\h\e\h\9\8\k\a\q\o\p\9\6\q\m\8\4\y\2\b\a\n\w\h\s\1\q\w\1\p\o\i\2\b\r\w\0\z\q\h\x\9\5\z\l\0\t\y\c\k\f\k\f\e\8\r\v\e\g\q\9\2\6\a\m\j\8\g\1\p\y\3\7\1\4\b\i\7\t\3\8\0\o\6\g\0\d\e\1\9\m\s\f\m\h\a\m\2\p\3\p\b\r\t\1\b\v\t\i\s\b\d\v\t\s\c\7\o\x\r\g\j\j\t\t\j\i\r\u\o\x\e\l\h\p\5\o\4\s\o\p\f\t\w\2\x\2\z\8\x\5\s\j\r\1\k\6\1\k\1\a\6\9\q\a\t\n\x\3\g\x\s\3\j\4\z\9\u\v\z\s\w\2\y\7\i\z\e\r\7\f\a\t\d\a\6\z\a\9\4\d\z\t\9\s\6\6\h\g\a\8\d\5\a\p\2\5\4\d\l\x\4\6\s\4\4\a\r\d\i\e\i\p\z\f\h\h\m\0\4\5\i\q\k\k\t\p\q\w\y\l\n\3\y\3\z\o\u\t\c\y\b\4\a\n\s\3\k\r\6\o\5\a\0\2\2\z\f\f\3\0\i\8\x\a\b\6\g\6\6\v\j\m\7\i\x\s\2\k\o\a\v\s\3\p\m\c\w\h\h\4\0\t\m\9\h\p\s\p\e\w\4\f\3\h\d\c\x\n\6\c\5\j\l\v\m\v\u\b\x\x\f\3\n\6\m\p\x\0\p\f\n\w\4\6\n\0\a\q\v\8\q\w\c\1\x\6\e\u\9\3\g\a\2\z\2\0\t\f\f\4\v\w\n\t\g\c\b\v\n\o\e\z\a\g\s\g\j\6\8\c\1\i\2\2\f\e\t\e\t\f\j\j\7\j\y\4\5\y\r\a\t\u\q\h\d\r\p\v\k\g\x\v\z\u\r\h\g\s\p\e\w\h\c\4\n\x\m\l\m\0\4\a\r\d\v\3\7\d\0\o\q\v\d\4\w\h\e\w\f\j\b\i\p\w\x\5\m\q\m\o\g\4\c\k\7\w\1\f\h\3\h\w\m\7\g\n\u\x ]] 00:06:13.948 10:49:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:13.948 10:49:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:13.948 [2024-12-09 10:49:06.954307] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:13.948 [2024-12-09 10:49:06.954396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60831 ] 00:06:13.948 [2024-12-09 10:49:07.115618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.207 [2024-12-09 10:49:07.171806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.207 [2024-12-09 10:49:07.214378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.207  [2024-12-09T10:49:07.644Z] Copying: 512/512 [B] (average 500 kBps) 00:06:14.465 00:06:14.465 10:49:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 2rrr8p6neblnr1heh98kaqop96qm84y2banwhs1qw1poi2brw0zqhx95zl0tyckfkfe8rvegq926amj8g1py3714bi7t380o6g0de19msfmham2p3pbrt1bvtisbdvtsc7oxrgjjttjiruoxelhp5o4sopftw2x2z8x5sjr1k61k1a69qatnx3gxs3j4z9uvzsw2y7izer7fatda6za94dzt9s66hga8d5ap254dlx46s44ardieipzfhhm045iqkktpqwyln3y3zoutcyb4ans3kr6o5a022zff30i8xab6g66vjm7ixs2koavs3pmcwhh40tm9hpspew4f3hdcxn6c5jlvmvubxxf3n6mpx0pfnw46n0aqv8qwc1x6eu93ga2z20tff4vwntgcbvnoezagsgj68c1i22fetetfjj7jy45yratuqhdrpvkgxvzurhgspewhc4nxmlm04ardv37d0oqvd4whewfjbipwx5mqmog4ck7w1fh3hwm7gnux == \2\r\r\r\8\p\6\n\e\b\l\n\r\1\h\e\h\9\8\k\a\q\o\p\9\6\q\m\8\4\y\2\b\a\n\w\h\s\1\q\w\1\p\o\i\2\b\r\w\0\z\q\h\x\9\5\z\l\0\t\y\c\k\f\k\f\e\8\r\v\e\g\q\9\2\6\a\m\j\8\g\1\p\y\3\7\1\4\b\i\7\t\3\8\0\o\6\g\0\d\e\1\9\m\s\f\m\h\a\m\2\p\3\p\b\r\t\1\b\v\t\i\s\b\d\v\t\s\c\7\o\x\r\g\j\j\t\t\j\i\r\u\o\x\e\l\h\p\5\o\4\s\o\p\f\t\w\2\x\2\z\8\x\5\s\j\r\1\k\6\1\k\1\a\6\9\q\a\t\n\x\3\g\x\s\3\j\4\z\9\u\v\z\s\w\2\y\7\i\z\e\r\7\f\a\t\d\a\6\z\a\9\4\d\z\t\9\s\6\6\h\g\a\8\d\5\a\p\2\5\4\d\l\x\4\6\s\4\4\a\r\d\i\e\i\p\z\f\h\h\m\0\4\5\i\q\k\k\t\p\q\w\y\l\n\3\y\3\z\o\u\t\c\y\b\4\a\n\s\3\k\r\6\o\5\a\0\2\2\z\f\f\3\0\i\8\x\a\b\6\g\6\6\v\j\m\7\i\x\s\2\k\o\a\v\s\3\p\m\c\w\h\h\4\0\t\m\9\h\p\s\p\e\w\4\f\3\h\d\c\x\n\6\c\5\j\l\v\m\v\u\b\x\x\f\3\n\6\m\p\x\0\p\f\n\w\4\6\n\0\a\q\v\8\q\w\c\1\x\6\e\u\9\3\g\a\2\z\2\0\t\f\f\4\v\w\n\t\g\c\b\v\n\o\e\z\a\g\s\g\j\6\8\c\1\i\2\2\f\e\t\e\t\f\j\j\7\j\y\4\5\y\r\a\t\u\q\h\d\r\p\v\k\g\x\v\z\u\r\h\g\s\p\e\w\h\c\4\n\x\m\l\m\0\4\a\r\d\v\3\7\d\0\o\q\v\d\4\w\h\e\w\f\j\b\i\p\w\x\5\m\q\m\o\g\4\c\k\7\w\1\f\h\3\h\w\m\7\g\n\u\x ]] 00:06:14.465 10:49:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.465 10:49:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:14.465 [2024-12-09 10:49:07.521415] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:14.465 [2024-12-09 10:49:07.521506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60838 ] 00:06:14.724 [2024-12-09 10:49:07.677892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.724 [2024-12-09 10:49:07.734970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.724 [2024-12-09 10:49:07.778210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.724  [2024-12-09T10:49:08.162Z] Copying: 512/512 [B] (average 250 kBps) 00:06:14.983 00:06:14.983 10:49:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 2rrr8p6neblnr1heh98kaqop96qm84y2banwhs1qw1poi2brw0zqhx95zl0tyckfkfe8rvegq926amj8g1py3714bi7t380o6g0de19msfmham2p3pbrt1bvtisbdvtsc7oxrgjjttjiruoxelhp5o4sopftw2x2z8x5sjr1k61k1a69qatnx3gxs3j4z9uvzsw2y7izer7fatda6za94dzt9s66hga8d5ap254dlx46s44ardieipzfhhm045iqkktpqwyln3y3zoutcyb4ans3kr6o5a022zff30i8xab6g66vjm7ixs2koavs3pmcwhh40tm9hpspew4f3hdcxn6c5jlvmvubxxf3n6mpx0pfnw46n0aqv8qwc1x6eu93ga2z20tff4vwntgcbvnoezagsgj68c1i22fetetfjj7jy45yratuqhdrpvkgxvzurhgspewhc4nxmlm04ardv37d0oqvd4whewfjbipwx5mqmog4ck7w1fh3hwm7gnux == \2\r\r\r\8\p\6\n\e\b\l\n\r\1\h\e\h\9\8\k\a\q\o\p\9\6\q\m\8\4\y\2\b\a\n\w\h\s\1\q\w\1\p\o\i\2\b\r\w\0\z\q\h\x\9\5\z\l\0\t\y\c\k\f\k\f\e\8\r\v\e\g\q\9\2\6\a\m\j\8\g\1\p\y\3\7\1\4\b\i\7\t\3\8\0\o\6\g\0\d\e\1\9\m\s\f\m\h\a\m\2\p\3\p\b\r\t\1\b\v\t\i\s\b\d\v\t\s\c\7\o\x\r\g\j\j\t\t\j\i\r\u\o\x\e\l\h\p\5\o\4\s\o\p\f\t\w\2\x\2\z\8\x\5\s\j\r\1\k\6\1\k\1\a\6\9\q\a\t\n\x\3\g\x\s\3\j\4\z\9\u\v\z\s\w\2\y\7\i\z\e\r\7\f\a\t\d\a\6\z\a\9\4\d\z\t\9\s\6\6\h\g\a\8\d\5\a\p\2\5\4\d\l\x\4\6\s\4\4\a\r\d\i\e\i\p\z\f\h\h\m\0\4\5\i\q\k\k\t\p\q\w\y\l\n\3\y\3\z\o\u\t\c\y\b\4\a\n\s\3\k\r\6\o\5\a\0\2\2\z\f\f\3\0\i\8\x\a\b\6\g\6\6\v\j\m\7\i\x\s\2\k\o\a\v\s\3\p\m\c\w\h\h\4\0\t\m\9\h\p\s\p\e\w\4\f\3\h\d\c\x\n\6\c\5\j\l\v\m\v\u\b\x\x\f\3\n\6\m\p\x\0\p\f\n\w\4\6\n\0\a\q\v\8\q\w\c\1\x\6\e\u\9\3\g\a\2\z\2\0\t\f\f\4\v\w\n\t\g\c\b\v\n\o\e\z\a\g\s\g\j\6\8\c\1\i\2\2\f\e\t\e\t\f\j\j\7\j\y\4\5\y\r\a\t\u\q\h\d\r\p\v\k\g\x\v\z\u\r\h\g\s\p\e\w\h\c\4\n\x\m\l\m\0\4\a\r\d\v\3\7\d\0\o\q\v\d\4\w\h\e\w\f\j\b\i\p\w\x\5\m\q\m\o\g\4\c\k\7\w\1\f\h\3\h\w\m\7\g\n\u\x ]] 00:06:14.983 10:49:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.983 10:49:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:14.983 [2024-12-09 10:49:08.080217] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
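The long backslash-heavy blocks in this output are not corruption; they are set -x rendering the content check. When bash traces a test such as [[ $output == "$input" ]], the quoted right-hand side is printed with every character escaped so it reads as a literal pattern rather than a glob, which is why the 512-byte random payload appears twice, once plain and once escaped. Reduced to a sketch (file names shortened, variable names assumed):

output=$(< dd.dump1)
input=$(< dd.dump0)
[[ $output == "$input" ]]   # quoting the RHS forces a literal, not glob, comparison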
00:06:14.983 [2024-12-09 10:49:08.080304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60846 ] 00:06:15.241 [2024-12-09 10:49:08.241464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.241 [2024-12-09 10:49:08.293654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.241 [2024-12-09 10:49:08.336586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.241  [2024-12-09T10:49:08.678Z] Copying: 512/512 [B] (average 166 kBps) 00:06:15.499 00:06:15.500 10:49:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 2rrr8p6neblnr1heh98kaqop96qm84y2banwhs1qw1poi2brw0zqhx95zl0tyckfkfe8rvegq926amj8g1py3714bi7t380o6g0de19msfmham2p3pbrt1bvtisbdvtsc7oxrgjjttjiruoxelhp5o4sopftw2x2z8x5sjr1k61k1a69qatnx3gxs3j4z9uvzsw2y7izer7fatda6za94dzt9s66hga8d5ap254dlx46s44ardieipzfhhm045iqkktpqwyln3y3zoutcyb4ans3kr6o5a022zff30i8xab6g66vjm7ixs2koavs3pmcwhh40tm9hpspew4f3hdcxn6c5jlvmvubxxf3n6mpx0pfnw46n0aqv8qwc1x6eu93ga2z20tff4vwntgcbvnoezagsgj68c1i22fetetfjj7jy45yratuqhdrpvkgxvzurhgspewhc4nxmlm04ardv37d0oqvd4whewfjbipwx5mqmog4ck7w1fh3hwm7gnux == \2\r\r\r\8\p\6\n\e\b\l\n\r\1\h\e\h\9\8\k\a\q\o\p\9\6\q\m\8\4\y\2\b\a\n\w\h\s\1\q\w\1\p\o\i\2\b\r\w\0\z\q\h\x\9\5\z\l\0\t\y\c\k\f\k\f\e\8\r\v\e\g\q\9\2\6\a\m\j\8\g\1\p\y\3\7\1\4\b\i\7\t\3\8\0\o\6\g\0\d\e\1\9\m\s\f\m\h\a\m\2\p\3\p\b\r\t\1\b\v\t\i\s\b\d\v\t\s\c\7\o\x\r\g\j\j\t\t\j\i\r\u\o\x\e\l\h\p\5\o\4\s\o\p\f\t\w\2\x\2\z\8\x\5\s\j\r\1\k\6\1\k\1\a\6\9\q\a\t\n\x\3\g\x\s\3\j\4\z\9\u\v\z\s\w\2\y\7\i\z\e\r\7\f\a\t\d\a\6\z\a\9\4\d\z\t\9\s\6\6\h\g\a\8\d\5\a\p\2\5\4\d\l\x\4\6\s\4\4\a\r\d\i\e\i\p\z\f\h\h\m\0\4\5\i\q\k\k\t\p\q\w\y\l\n\3\y\3\z\o\u\t\c\y\b\4\a\n\s\3\k\r\6\o\5\a\0\2\2\z\f\f\3\0\i\8\x\a\b\6\g\6\6\v\j\m\7\i\x\s\2\k\o\a\v\s\3\p\m\c\w\h\h\4\0\t\m\9\h\p\s\p\e\w\4\f\3\h\d\c\x\n\6\c\5\j\l\v\m\v\u\b\x\x\f\3\n\6\m\p\x\0\p\f\n\w\4\6\n\0\a\q\v\8\q\w\c\1\x\6\e\u\9\3\g\a\2\z\2\0\t\f\f\4\v\w\n\t\g\c\b\v\n\o\e\z\a\g\s\g\j\6\8\c\1\i\2\2\f\e\t\e\t\f\j\j\7\j\y\4\5\y\r\a\t\u\q\h\d\r\p\v\k\g\x\v\z\u\r\h\g\s\p\e\w\h\c\4\n\x\m\l\m\0\4\a\r\d\v\3\7\d\0\o\q\v\d\4\w\h\e\w\f\j\b\i\p\w\x\5\m\q\m\o\g\4\c\k\7\w\1\f\h\3\h\w\m\7\g\n\u\x ]] 00:06:15.500 00:06:15.500 real 0m4.592s 00:06:15.500 user 0m2.639s 00:06:15.500 sys 0m0.974s 00:06:15.500 ************************************ 00:06:15.500 END TEST dd_flags_misc_forced_aio 00:06:15.500 ************************************ 00:06:15.500 10:49:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.500 10:49:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:15.500 10:49:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:15.500 10:49:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:15.500 10:49:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:15.500 ************************************ 00:06:15.500 END TEST spdk_dd_posix 00:06:15.500 ************************************ 00:06:15.500 00:06:15.500 real 0m20.919s 00:06:15.500 user 0m10.837s 00:06:15.500 sys 0m5.841s 00:06:15.500 10:49:08 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.500 10:49:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:15.758 10:49:08 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:15.758 10:49:08 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.758 10:49:08 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.758 10:49:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:15.758 ************************************ 00:06:15.758 START TEST spdk_dd_malloc 00:06:15.758 ************************************ 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:15.758 * Looking for test storage... 00:06:15.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:15.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.758 --rc genhtml_branch_coverage=1 00:06:15.758 --rc genhtml_function_coverage=1 00:06:15.758 --rc genhtml_legend=1 00:06:15.758 --rc geninfo_all_blocks=1 00:06:15.758 --rc geninfo_unexecuted_blocks=1 00:06:15.758 00:06:15.758 ' 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:15.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.758 --rc genhtml_branch_coverage=1 00:06:15.758 --rc genhtml_function_coverage=1 00:06:15.758 --rc genhtml_legend=1 00:06:15.758 --rc geninfo_all_blocks=1 00:06:15.758 --rc geninfo_unexecuted_blocks=1 00:06:15.758 00:06:15.758 ' 00:06:15.758 10:49:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:15.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.758 --rc genhtml_branch_coverage=1 00:06:15.758 --rc genhtml_function_coverage=1 00:06:15.759 --rc genhtml_legend=1 00:06:15.759 --rc geninfo_all_blocks=1 00:06:15.759 --rc geninfo_unexecuted_blocks=1 00:06:15.759 00:06:15.759 ' 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:15.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.759 --rc genhtml_branch_coverage=1 00:06:15.759 --rc genhtml_function_coverage=1 00:06:15.759 --rc genhtml_legend=1 00:06:15.759 --rc geninfo_all_blocks=1 00:06:15.759 --rc geninfo_unexecuted_blocks=1 00:06:15.759 00:06:15.759 ' 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.759 10:49:08 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:15.759 ************************************ 00:06:15.759 START TEST dd_malloc_copy 00:06:15.759 ************************************ 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
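dd_malloc_copy runs spdk_dd purely against RAM: gen_conf expands the two associative arrays above into the JSON bdev configuration printed further down, giving two malloc bdevs of 1048576 blocks x 512 bytes (512 MiB each), and spdk_dd copies one onto the other. A hedged sketch of an equivalent standalone invocation follows; the JSON mirrors the config shown in this log, while the file name malloc_bdevs.json, the shortened spdk_dd path, and passing the config as a file (rather than the script's /dev/fd/62) are assumptions.

cat > malloc_bdevs.json <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_malloc_create",
    "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_malloc_create",
    "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_wait_for_examine" } ] } ] }
JSON
spdk_dd --ib=malloc0 --ob=malloc1 --json malloc_bdevs.json   # copy 512 MiB malloc0 -> malloc1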
00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:15.759 10:49:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:15.759 [2024-12-09 10:49:08.924149] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:15.759 [2024-12-09 10:49:08.924297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60928 ] 00:06:16.017 { 00:06:16.017 "subsystems": [ 00:06:16.017 { 00:06:16.017 "subsystem": "bdev", 00:06:16.017 "config": [ 00:06:16.017 { 00:06:16.017 "params": { 00:06:16.017 "block_size": 512, 00:06:16.017 "num_blocks": 1048576, 00:06:16.017 "name": "malloc0" 00:06:16.017 }, 00:06:16.017 "method": "bdev_malloc_create" 00:06:16.017 }, 00:06:16.017 { 00:06:16.017 "params": { 00:06:16.017 "block_size": 512, 00:06:16.017 "num_blocks": 1048576, 00:06:16.017 "name": "malloc1" 00:06:16.017 }, 00:06:16.017 "method": "bdev_malloc_create" 00:06:16.017 }, 00:06:16.017 { 00:06:16.017 "method": "bdev_wait_for_examine" 00:06:16.017 } 00:06:16.017 ] 00:06:16.017 } 00:06:16.017 ] 00:06:16.017 } 00:06:16.017 [2024-12-09 10:49:09.081944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.018 [2024-12-09 10:49:09.139304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.018 [2024-12-09 10:49:09.182194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.392  [2024-12-09T10:49:11.505Z] Copying: 203/512 [MB] (203 MBps) [2024-12-09T10:49:12.070Z] Copying: 404/512 [MB] (201 MBps) [2024-12-09T10:49:12.638Z] Copying: 512/512 [MB] (average 202 MBps) 00:06:19.459 00:06:19.459 10:49:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:19.459 10:49:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:19.459 10:49:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:19.459 10:49:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:19.459 [2024-12-09 10:49:12.543231] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:19.459 [2024-12-09 10:49:12.543352] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60970 ] 00:06:19.459 { 00:06:19.459 "subsystems": [ 00:06:19.459 { 00:06:19.459 "subsystem": "bdev", 00:06:19.459 "config": [ 00:06:19.459 { 00:06:19.459 "params": { 00:06:19.459 "block_size": 512, 00:06:19.459 "num_blocks": 1048576, 00:06:19.459 "name": "malloc0" 00:06:19.459 }, 00:06:19.459 "method": "bdev_malloc_create" 00:06:19.459 }, 00:06:19.459 { 00:06:19.459 "params": { 00:06:19.459 "block_size": 512, 00:06:19.459 "num_blocks": 1048576, 00:06:19.459 "name": "malloc1" 00:06:19.459 }, 00:06:19.459 "method": "bdev_malloc_create" 00:06:19.459 }, 00:06:19.459 { 00:06:19.459 "method": "bdev_wait_for_examine" 00:06:19.459 } 00:06:19.459 ] 00:06:19.459 } 00:06:19.459 ] 00:06:19.459 } 00:06:19.718 [2024-12-09 10:49:12.694870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.718 [2024-12-09 10:49:12.747305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.718 [2024-12-09 10:49:12.790833] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.096  [2024-12-09T10:49:15.212Z] Copying: 203/512 [MB] (203 MBps) [2024-12-09T10:49:15.775Z] Copying: 408/512 [MB] (205 MBps) [2024-12-09T10:49:16.341Z] Copying: 512/512 [MB] (average 205 MBps) 00:06:23.162 00:06:23.162 00:06:23.162 real 0m7.205s 00:06:23.162 user 0m6.370s 00:06:23.162 sys 0m0.672s 00:06:23.162 10:49:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.162 10:49:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:23.162 ************************************ 00:06:23.162 END TEST dd_malloc_copy 00:06:23.162 ************************************ 00:06:23.162 00:06:23.162 real 0m7.434s 00:06:23.162 user 0m6.488s 00:06:23.162 sys 0m0.793s 00:06:23.162 ************************************ 00:06:23.162 END TEST spdk_dd_malloc 00:06:23.162 ************************************ 00:06:23.162 10:49:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.162 10:49:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:23.162 10:49:16 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:23.162 10:49:16 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:23.162 10:49:16 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.162 10:49:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:23.162 ************************************ 00:06:23.162 START TEST spdk_dd_bdev_to_bdev 00:06:23.162 ************************************ 00:06:23.162 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:23.162 * Looking for test storage... 
00:06:23.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:23.162 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.162 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:23.162 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:23.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.421 --rc genhtml_branch_coverage=1 00:06:23.421 --rc genhtml_function_coverage=1 00:06:23.421 --rc genhtml_legend=1 00:06:23.421 --rc geninfo_all_blocks=1 00:06:23.421 --rc geninfo_unexecuted_blocks=1 00:06:23.421 00:06:23.421 ' 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:23.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.421 --rc genhtml_branch_coverage=1 00:06:23.421 --rc genhtml_function_coverage=1 00:06:23.421 --rc genhtml_legend=1 00:06:23.421 --rc geninfo_all_blocks=1 00:06:23.421 --rc geninfo_unexecuted_blocks=1 00:06:23.421 00:06:23.421 ' 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:23.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.421 --rc genhtml_branch_coverage=1 00:06:23.421 --rc genhtml_function_coverage=1 00:06:23.421 --rc genhtml_legend=1 00:06:23.421 --rc geninfo_all_blocks=1 00:06:23.421 --rc geninfo_unexecuted_blocks=1 00:06:23.421 00:06:23.421 ' 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:23.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.421 --rc genhtml_branch_coverage=1 00:06:23.421 --rc genhtml_function_coverage=1 00:06:23.421 --rc genhtml_legend=1 00:06:23.421 --rc geninfo_all_blocks=1 00:06:23.421 --rc geninfo_unexecuted_blocks=1 00:06:23.421 00:06:23.421 ' 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.421 10:49:16 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:23.421 ************************************ 00:06:23.421 START TEST dd_inflate_file 00:06:23.421 ************************************ 00:06:23.421 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:23.421 [2024-12-09 10:49:16.408886] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
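Taken together with the wc -c result just below, the sequence above shows how the bdev_to_bdev fixture is built: the magic line is written into dd.dump0 first, then dd_inflate_file appends 64 MiB of zeros with --oflag=append, so the 27-byte marker (26 characters plus newline) sits at the front of a 67108891-byte file (64*1048576 + 27). A sketch of that preparation, with paths shortened and the exact redirection of the echo assumed:

echo 'This Is Our Magic, find it' > dd.dump0    # 27 bytes including the newline
spdk_dd --if=/dev/zero --of=dd.dump0 --oflag=append \
        --bs=1048576 --count=64                 # append 64 MiB of zeros after the marker
wc -c < dd.dump0                                # 67108891 = 64*1048576 + 27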
00:06:23.421 [2024-12-09 10:49:16.409012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61088 ] 00:06:23.421 [2024-12-09 10:49:16.559693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.681 [2024-12-09 10:49:16.616043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.681 [2024-12-09 10:49:16.658874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.681  [2024-12-09T10:49:17.118Z] Copying: 64/64 [MB] (average 2000 MBps) 00:06:23.939 00:06:23.939 00:06:23.939 real 0m0.558s 00:06:23.939 user 0m0.343s 00:06:23.939 sys 0m0.250s 00:06:23.939 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.939 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:23.939 ************************************ 00:06:23.939 END TEST dd_inflate_file 00:06:23.939 ************************************ 00:06:23.939 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:23.939 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:23.939 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:23.939 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:23.940 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.940 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:23.940 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:23.940 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:23.940 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:23.940 ************************************ 00:06:23.940 START TEST dd_copy_to_out_bdev 00:06:23.940 ************************************ 00:06:23.940 10:49:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:23.940 [2024-12-09 10:49:17.034669] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
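dd_copy_to_out_bdev then pushes the padded dump file onto the first NVMe namespace. The JSON handed to spdk_dd, reproduced in the config block below, simply attaches both controllers by PCI address so that Nvme0n1 and Nvme1n1 exist as bdev targets. A shortened sketch of the same invocation; the config file name nvme_bdevs.json is hypothetical and the real test feeds the JSON through /dev/fd/62 instead.

cat > nvme_bdevs.json <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:10.0" } },
  { "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "pcie", "traddr": "0000:00:11.0" } },
  { "method": "bdev_wait_for_examine" } ] } ] }
JSON
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --json nvme_bdevs.json   # write the 64 MiB file onto Nvme0n1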
00:06:23.940 [2024-12-09 10:49:17.034807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61125 ] 00:06:23.940 { 00:06:23.940 "subsystems": [ 00:06:23.940 { 00:06:23.940 "subsystem": "bdev", 00:06:23.940 "config": [ 00:06:23.940 { 00:06:23.940 "params": { 00:06:23.940 "trtype": "pcie", 00:06:23.940 "traddr": "0000:00:10.0", 00:06:23.940 "name": "Nvme0" 00:06:23.940 }, 00:06:23.940 "method": "bdev_nvme_attach_controller" 00:06:23.940 }, 00:06:23.940 { 00:06:23.940 "params": { 00:06:23.940 "trtype": "pcie", 00:06:23.940 "traddr": "0000:00:11.0", 00:06:23.940 "name": "Nvme1" 00:06:23.940 }, 00:06:23.940 "method": "bdev_nvme_attach_controller" 00:06:23.940 }, 00:06:23.940 { 00:06:23.940 "method": "bdev_wait_for_examine" 00:06:23.940 } 00:06:23.940 ] 00:06:23.940 } 00:06:23.940 ] 00:06:23.940 } 00:06:24.198 [2024-12-09 10:49:17.187042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.198 [2024-12-09 10:49:17.240478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.198 [2024-12-09 10:49:17.284611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.129  [2024-12-09T10:49:18.565Z] Copying: 64/64 [MB] (average 78 MBps) 00:06:25.386 00:06:25.386 00:06:25.386 real 0m1.559s 00:06:25.386 user 0m1.349s 00:06:25.386 sys 0m1.155s 00:06:25.386 ************************************ 00:06:25.386 END TEST dd_copy_to_out_bdev 00:06:25.386 ************************************ 00:06:25.386 10:49:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.386 10:49:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:25.644 10:49:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:25.644 10:49:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:25.644 10:49:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:25.644 10:49:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.644 10:49:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:25.644 ************************************ 00:06:25.644 START TEST dd_offset_magic 00:06:25.644 ************************************ 00:06:25.644 10:49:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:06:25.644 10:49:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:25.644 10:49:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:25.644 10:49:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:25.644 10:49:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:25.644 10:49:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:25.644 10:49:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:25.644 10:49:18 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:25.644 10:49:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:25.644 [2024-12-09 10:49:18.643935] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:25.644 [2024-12-09 10:49:18.644079] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61161 ] 00:06:25.644 { 00:06:25.644 "subsystems": [ 00:06:25.644 { 00:06:25.644 "subsystem": "bdev", 00:06:25.644 "config": [ 00:06:25.644 { 00:06:25.644 "params": { 00:06:25.644 "trtype": "pcie", 00:06:25.644 "traddr": "0000:00:10.0", 00:06:25.644 "name": "Nvme0" 00:06:25.644 }, 00:06:25.644 "method": "bdev_nvme_attach_controller" 00:06:25.644 }, 00:06:25.644 { 00:06:25.644 "params": { 00:06:25.644 "trtype": "pcie", 00:06:25.644 "traddr": "0000:00:11.0", 00:06:25.644 "name": "Nvme1" 00:06:25.644 }, 00:06:25.644 "method": "bdev_nvme_attach_controller" 00:06:25.644 }, 00:06:25.644 { 00:06:25.644 "method": "bdev_wait_for_examine" 00:06:25.644 } 00:06:25.644 ] 00:06:25.644 } 00:06:25.644 ] 00:06:25.644 } 00:06:25.644 [2024-12-09 10:49:18.781867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.902 [2024-12-09 10:49:18.839413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.902 [2024-12-09 10:49:18.881728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.161  [2024-12-09T10:49:19.599Z] Copying: 65/65 [MB] (average 764 MBps) 00:06:26.420 00:06:26.420 10:49:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:26.420 10:49:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:26.420 10:49:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:26.420 10:49:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:26.420 [2024-12-09 10:49:19.475708] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:26.420 [2024-12-09 10:49:19.475811] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61181 ] 00:06:26.420 { 00:06:26.420 "subsystems": [ 00:06:26.420 { 00:06:26.420 "subsystem": "bdev", 00:06:26.420 "config": [ 00:06:26.420 { 00:06:26.420 "params": { 00:06:26.420 "trtype": "pcie", 00:06:26.420 "traddr": "0000:00:10.0", 00:06:26.420 "name": "Nvme0" 00:06:26.420 }, 00:06:26.420 "method": "bdev_nvme_attach_controller" 00:06:26.420 }, 00:06:26.420 { 00:06:26.420 "params": { 00:06:26.420 "trtype": "pcie", 00:06:26.420 "traddr": "0000:00:11.0", 00:06:26.420 "name": "Nvme1" 00:06:26.420 }, 00:06:26.420 "method": "bdev_nvme_attach_controller" 00:06:26.420 }, 00:06:26.420 { 00:06:26.420 "method": "bdev_wait_for_examine" 00:06:26.420 } 00:06:26.420 ] 00:06:26.420 } 00:06:26.420 ] 00:06:26.420 } 00:06:26.677 [2024-12-09 10:49:19.634862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.677 [2024-12-09 10:49:19.686430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.677 [2024-12-09 10:49:19.728572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.936  [2024-12-09T10:49:20.115Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:26.936 00:06:26.936 10:49:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:26.936 10:49:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:27.194 10:49:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:27.194 10:49:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:27.194 10:49:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:27.194 10:49:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:27.194 10:49:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:27.194 [2024-12-09 10:49:20.150508] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
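Stripped of the xtrace noise, one offset_magic iteration reduces to the two spdk_dd calls traced above plus a short check; the sketch below reuses this run's values (Nvme0n1/Nvme1n1, 1 MiB blocks, the 16 MiB offset) and assumes the same bdev JSON config on fd 62, with spdk_dd standing for the full build/bin path and dd.dump1 for the dump file under test/dd.
# write 65 MiB from Nvme0n1 into Nvme1n1 starting 16 MiB in
spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62
# read 1 MiB back from the same offset and confirm the magic text survived the copy
spdk_dd --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62
read -rn26 magic_check < dd.dump1   # reading from the dump file is an assumption of this sketch
[[ $magic_check == 'This Is Our Magic, find it' ]] || echo 'magic mismatch at 16 MiB offset' >&2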
00:06:27.194 [2024-12-09 10:49:20.150638] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61203 ] 00:06:27.194 { 00:06:27.194 "subsystems": [ 00:06:27.194 { 00:06:27.194 "subsystem": "bdev", 00:06:27.194 "config": [ 00:06:27.194 { 00:06:27.194 "params": { 00:06:27.194 "trtype": "pcie", 00:06:27.194 "traddr": "0000:00:10.0", 00:06:27.194 "name": "Nvme0" 00:06:27.194 }, 00:06:27.194 "method": "bdev_nvme_attach_controller" 00:06:27.194 }, 00:06:27.194 { 00:06:27.194 "params": { 00:06:27.194 "trtype": "pcie", 00:06:27.194 "traddr": "0000:00:11.0", 00:06:27.194 "name": "Nvme1" 00:06:27.194 }, 00:06:27.194 "method": "bdev_nvme_attach_controller" 00:06:27.194 }, 00:06:27.194 { 00:06:27.194 "method": "bdev_wait_for_examine" 00:06:27.194 } 00:06:27.194 ] 00:06:27.194 } 00:06:27.194 ] 00:06:27.194 } 00:06:27.194 [2024-12-09 10:49:20.303227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.194 [2024-12-09 10:49:20.354286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.452 [2024-12-09 10:49:20.396948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.710  [2024-12-09T10:49:21.147Z] Copying: 65/65 [MB] (average 955 MBps) 00:06:27.968 00:06:27.968 10:49:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:27.968 10:49:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:27.968 10:49:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:27.968 10:49:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:27.968 [2024-12-09 10:49:20.961702] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:27.968 [2024-12-09 10:49:20.961857] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61218 ] 00:06:27.968 { 00:06:27.968 "subsystems": [ 00:06:27.968 { 00:06:27.968 "subsystem": "bdev", 00:06:27.968 "config": [ 00:06:27.968 { 00:06:27.968 "params": { 00:06:27.968 "trtype": "pcie", 00:06:27.968 "traddr": "0000:00:10.0", 00:06:27.968 "name": "Nvme0" 00:06:27.968 }, 00:06:27.968 "method": "bdev_nvme_attach_controller" 00:06:27.968 }, 00:06:27.968 { 00:06:27.968 "params": { 00:06:27.968 "trtype": "pcie", 00:06:27.968 "traddr": "0000:00:11.0", 00:06:27.968 "name": "Nvme1" 00:06:27.968 }, 00:06:27.968 "method": "bdev_nvme_attach_controller" 00:06:27.968 }, 00:06:27.968 { 00:06:27.968 "method": "bdev_wait_for_examine" 00:06:27.968 } 00:06:27.968 ] 00:06:27.968 } 00:06:27.968 ] 00:06:27.968 } 00:06:27.968 [2024-12-09 10:49:21.113411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.225 [2024-12-09 10:49:21.165322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.225 [2024-12-09 10:49:21.207727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.225  [2024-12-09T10:49:21.661Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:28.482 00:06:28.482 10:49:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:28.482 10:49:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:28.482 00:06:28.482 real 0m3.000s 00:06:28.482 user 0m2.265s 00:06:28.482 sys 0m0.813s 00:06:28.482 10:49:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.482 ************************************ 00:06:28.482 END TEST dd_offset_magic 00:06:28.482 ************************************ 00:06:28.482 10:49:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:28.482 10:49:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:28.482 10:49:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:28.482 10:49:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:28.482 10:49:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:28.482 10:49:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:28.482 10:49:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:28.482 10:49:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:28.482 10:49:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:28.482 10:49:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:28.482 10:49:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:28.482 10:49:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:28.740 [2024-12-09 10:49:21.683881] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
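The cleanup that follows only zeroes the head of each namespace so later tests start from a known state; condensed, with the same block size and count as this run and the bdev config again assumed on fd 62:
# clear_nvme: overwrite the first five 1 MiB blocks (covering the 4194330-byte span) with zeroes
spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62
spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62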
00:06:28.740 [2024-12-09 10:49:21.684025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61249 ] 00:06:28.740 { 00:06:28.740 "subsystems": [ 00:06:28.740 { 00:06:28.740 "subsystem": "bdev", 00:06:28.740 "config": [ 00:06:28.740 { 00:06:28.740 "params": { 00:06:28.740 "trtype": "pcie", 00:06:28.740 "traddr": "0000:00:10.0", 00:06:28.740 "name": "Nvme0" 00:06:28.740 }, 00:06:28.740 "method": "bdev_nvme_attach_controller" 00:06:28.740 }, 00:06:28.740 { 00:06:28.740 "params": { 00:06:28.740 "trtype": "pcie", 00:06:28.740 "traddr": "0000:00:11.0", 00:06:28.740 "name": "Nvme1" 00:06:28.740 }, 00:06:28.740 "method": "bdev_nvme_attach_controller" 00:06:28.740 }, 00:06:28.740 { 00:06:28.740 "method": "bdev_wait_for_examine" 00:06:28.740 } 00:06:28.740 ] 00:06:28.740 } 00:06:28.740 ] 00:06:28.740 } 00:06:28.740 [2024-12-09 10:49:21.835564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.740 [2024-12-09 10:49:21.890883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.997 [2024-12-09 10:49:21.933176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.997  [2024-12-09T10:49:22.433Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:06:29.254 00:06:29.254 10:49:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:29.254 10:49:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:29.254 10:49:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:29.254 10:49:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:29.254 10:49:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:29.254 10:49:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:29.254 10:49:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:29.254 10:49:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:29.254 10:49:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:29.254 10:49:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:29.254 [2024-12-09 10:49:22.378564] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:29.254 [2024-12-09 10:49:22.378695] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61270 ] 00:06:29.254 { 00:06:29.254 "subsystems": [ 00:06:29.254 { 00:06:29.254 "subsystem": "bdev", 00:06:29.254 "config": [ 00:06:29.254 { 00:06:29.254 "params": { 00:06:29.254 "trtype": "pcie", 00:06:29.254 "traddr": "0000:00:10.0", 00:06:29.254 "name": "Nvme0" 00:06:29.254 }, 00:06:29.254 "method": "bdev_nvme_attach_controller" 00:06:29.254 }, 00:06:29.254 { 00:06:29.254 "params": { 00:06:29.254 "trtype": "pcie", 00:06:29.254 "traddr": "0000:00:11.0", 00:06:29.254 "name": "Nvme1" 00:06:29.254 }, 00:06:29.254 "method": "bdev_nvme_attach_controller" 00:06:29.254 }, 00:06:29.254 { 00:06:29.254 "method": "bdev_wait_for_examine" 00:06:29.254 } 00:06:29.254 ] 00:06:29.254 } 00:06:29.254 ] 00:06:29.254 } 00:06:29.511 [2024-12-09 10:49:22.530631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.511 [2024-12-09 10:49:22.581313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.511 [2024-12-09 10:49:22.624020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.769  [2024-12-09T10:49:23.205Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:06:30.026 00:06:30.026 10:49:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:30.026 00:06:30.026 real 0m6.867s 00:06:30.026 user 0m5.124s 00:06:30.026 sys 0m2.905s 00:06:30.026 10:49:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.026 10:49:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:30.026 ************************************ 00:06:30.026 END TEST spdk_dd_bdev_to_bdev 00:06:30.026 ************************************ 00:06:30.026 10:49:23 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:30.026 10:49:23 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:30.026 10:49:23 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.026 10:49:23 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.026 10:49:23 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:30.026 ************************************ 00:06:30.026 START TEST spdk_dd_uring 00:06:30.026 ************************************ 00:06:30.026 10:49:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:30.284 * Looking for test storage... 
00:06:30.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.284 --rc genhtml_branch_coverage=1 00:06:30.284 --rc genhtml_function_coverage=1 00:06:30.284 --rc genhtml_legend=1 00:06:30.284 --rc geninfo_all_blocks=1 00:06:30.284 --rc geninfo_unexecuted_blocks=1 00:06:30.284 00:06:30.284 ' 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.284 --rc genhtml_branch_coverage=1 00:06:30.284 --rc genhtml_function_coverage=1 00:06:30.284 --rc genhtml_legend=1 00:06:30.284 --rc geninfo_all_blocks=1 00:06:30.284 --rc geninfo_unexecuted_blocks=1 00:06:30.284 00:06:30.284 ' 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.284 --rc genhtml_branch_coverage=1 00:06:30.284 --rc genhtml_function_coverage=1 00:06:30.284 --rc genhtml_legend=1 00:06:30.284 --rc geninfo_all_blocks=1 00:06:30.284 --rc geninfo_unexecuted_blocks=1 00:06:30.284 00:06:30.284 ' 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.284 --rc genhtml_branch_coverage=1 00:06:30.284 --rc genhtml_function_coverage=1 00:06:30.284 --rc genhtml_legend=1 00:06:30.284 --rc geninfo_all_blocks=1 00:06:30.284 --rc geninfo_unexecuted_blocks=1 00:06:30.284 00:06:30.284 ' 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.284 10:49:23 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:30.285 ************************************ 00:06:30.285 START TEST dd_uring_copy 00:06:30.285 ************************************ 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:30.285 
10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=n14jnve5h7vov3k41ma9wxdi3p6fszrp9xi9q5ut7r27cmbdobgu3ptos50y1t0vp9p4r9rus0jrmgagizuhg8erswea4rdk9b6vp60smz1eok2rctji95aa56qgbinvim8057tdhtk7xja3cdeqiag0svyn6m5e8y5ghv5zyhqstcgnvs9hc34jbqetsx2omho21skaao0mo5wu880m7lde8674jh7rvdvl3tyfx1bg7odaevp4iy5qo3sme5th0pw1ewfi4wba7bdacuzrl0amqt9rf3ydw9wytpi6yd1sbg9iv3jm3doz0ibpoaiy81hfceyuycvutbidovtmkx1pvcdq2t7zhw5w163ke3jnueuafrwhckqumhy8gr3zclwpncx91jcu1h5q8msqr27v4qhdwktcb3dlnxsnv7jvzlfledcm0r3r42leby2uw1xj80ns3u06otr1ij7k5jhfon1sdl4yukwgohqii6yy8riekmnjjnq0oiuh2jncv6udccqj6666z82m11lyvbggvydfdpop24c1fd3vh8ydi0lsg6ynr61v6jzxklq6cn1twlz0s5d0is1l6wxfnjk33n7vukswypxqv6cbzirat13uspg6skipp2fuy70zh1jc6tc9pnscasxhp9nhc701rxzcnmbgxgy26bh7jbg4qbuy1vj1nrwuwefmjcrfy1m4zrwguwrvld1obg0yw94947zda7il6ml4180ekfekd5lfsowqjuru1rrgc12jez6nqg1qsj77q25of3itainoxlm593mlh6migfsi6vk9o0e1epjpe1vrjbk2gavbxrwfrnyym0n7gh1ce618t303vwsbhcbd5ukpg0sotlf8fsbognuqli5sob3026isokn3afnznx4yf94bglnd14ji6yw9i0j68md0d8kctwi3joe7gpd95rtc8gjyjm4bt0lwfuom8q9tjzwduhd7fq1tkcxa23a2t7j7wzn8z30chvjk7ooyt1v6t6fi6syl 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
n14jnve5h7vov3k41ma9wxdi3p6fszrp9xi9q5ut7r27cmbdobgu3ptos50y1t0vp9p4r9rus0jrmgagizuhg8erswea4rdk9b6vp60smz1eok2rctji95aa56qgbinvim8057tdhtk7xja3cdeqiag0svyn6m5e8y5ghv5zyhqstcgnvs9hc34jbqetsx2omho21skaao0mo5wu880m7lde8674jh7rvdvl3tyfx1bg7odaevp4iy5qo3sme5th0pw1ewfi4wba7bdacuzrl0amqt9rf3ydw9wytpi6yd1sbg9iv3jm3doz0ibpoaiy81hfceyuycvutbidovtmkx1pvcdq2t7zhw5w163ke3jnueuafrwhckqumhy8gr3zclwpncx91jcu1h5q8msqr27v4qhdwktcb3dlnxsnv7jvzlfledcm0r3r42leby2uw1xj80ns3u06otr1ij7k5jhfon1sdl4yukwgohqii6yy8riekmnjjnq0oiuh2jncv6udccqj6666z82m11lyvbggvydfdpop24c1fd3vh8ydi0lsg6ynr61v6jzxklq6cn1twlz0s5d0is1l6wxfnjk33n7vukswypxqv6cbzirat13uspg6skipp2fuy70zh1jc6tc9pnscasxhp9nhc701rxzcnmbgxgy26bh7jbg4qbuy1vj1nrwuwefmjcrfy1m4zrwguwrvld1obg0yw94947zda7il6ml4180ekfekd5lfsowqjuru1rrgc12jez6nqg1qsj77q25of3itainoxlm593mlh6migfsi6vk9o0e1epjpe1vrjbk2gavbxrwfrnyym0n7gh1ce618t303vwsbhcbd5ukpg0sotlf8fsbognuqli5sob3026isokn3afnznx4yf94bglnd14ji6yw9i0j68md0d8kctwi3joe7gpd95rtc8gjyjm4bt0lwfuom8q9tjzwduhd7fq1tkcxa23a2t7j7wzn8z30chvjk7ooyt1v6t6fi6syl 00:06:30.285 10:49:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:30.285 [2024-12-09 10:49:23.461041] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:30.285 [2024-12-09 10:49:23.461167] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61350 ] 00:06:30.543 [2024-12-09 10:49:23.613822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.543 [2024-12-09 10:49:23.665678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.543 [2024-12-09 10:49:23.708715] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.477  [2024-12-09T10:49:24.914Z] Copying: 511/511 [MB] (average 1257 MBps) 00:06:31.735 00:06:31.735 10:49:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:31.735 10:49:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:31.735 10:49:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:31.735 10:49:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:31.735 [2024-12-09 10:49:24.718002] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
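Condensed, the dd_uring_copy preparation above does three things: hot-add a 512M zram device, describe a malloc bdev (malloc0, 1048576 blocks of 512 B) and a uring bdev (uring0 on /dev/zram1) in the JSON config, and build magic.dump0 from the generated 1024-byte magic followed by an appended run of zeroes. A rough sketch; the disksize sysfs path and the redirect of the magic into magic.dump0 are assumptions, the rest mirrors the trace:
zram_id=$(cat /sys/class/zram-control/hot_add)      # returned 1 in this run
echo 512M > "/sys/block/zram${zram_id}/disksize"    # assumed target of the 'echo 512M' above
echo "$magic" > magic.dump0                         # the 1024-byte random magic generated above
spdk_dd --if=/dev/zero --of=magic.dump0 --oflag=append --bs=536869887 --count=1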
00:06:31.735 [2024-12-09 10:49:24.718082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61367 ] 00:06:31.735 { 00:06:31.735 "subsystems": [ 00:06:31.735 { 00:06:31.735 "subsystem": "bdev", 00:06:31.735 "config": [ 00:06:31.735 { 00:06:31.735 "params": { 00:06:31.735 "block_size": 512, 00:06:31.735 "num_blocks": 1048576, 00:06:31.735 "name": "malloc0" 00:06:31.735 }, 00:06:31.735 "method": "bdev_malloc_create" 00:06:31.735 }, 00:06:31.735 { 00:06:31.735 "params": { 00:06:31.735 "filename": "/dev/zram1", 00:06:31.735 "name": "uring0" 00:06:31.735 }, 00:06:31.735 "method": "bdev_uring_create" 00:06:31.735 }, 00:06:31.735 { 00:06:31.735 "method": "bdev_wait_for_examine" 00:06:31.735 } 00:06:31.735 ] 00:06:31.735 } 00:06:31.735 ] 00:06:31.735 } 00:06:31.735 [2024-12-09 10:49:24.863419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.994 [2024-12-09 10:49:24.913777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.994 [2024-12-09 10:49:24.957203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.367  [2024-12-09T10:49:27.177Z] Copying: 233/512 [MB] (233 MBps) [2024-12-09T10:49:27.451Z] Copying: 478/512 [MB] (245 MBps) [2024-12-09T10:49:27.708Z] Copying: 512/512 [MB] (average 240 MBps) 00:06:34.529 00:06:34.529 10:49:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:34.529 10:49:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:34.529 10:49:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:34.529 10:49:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:34.529 [2024-12-09 10:49:27.661581] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:34.529 [2024-12-09 10:49:27.661652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61411 ] 00:06:34.529 { 00:06:34.529 "subsystems": [ 00:06:34.529 { 00:06:34.529 "subsystem": "bdev", 00:06:34.529 "config": [ 00:06:34.529 { 00:06:34.529 "params": { 00:06:34.529 "block_size": 512, 00:06:34.529 "num_blocks": 1048576, 00:06:34.529 "name": "malloc0" 00:06:34.529 }, 00:06:34.529 "method": "bdev_malloc_create" 00:06:34.529 }, 00:06:34.529 { 00:06:34.529 "params": { 00:06:34.529 "filename": "/dev/zram1", 00:06:34.529 "name": "uring0" 00:06:34.529 }, 00:06:34.529 "method": "bdev_uring_create" 00:06:34.529 }, 00:06:34.529 { 00:06:34.529 "method": "bdev_wait_for_examine" 00:06:34.529 } 00:06:34.529 ] 00:06:34.529 } 00:06:34.529 ] 00:06:34.529 } 00:06:34.787 [2024-12-09 10:49:27.814095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.787 [2024-12-09 10:49:27.867993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.787 [2024-12-09 10:49:27.911956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.163  [2024-12-09T10:49:30.274Z] Copying: 197/512 [MB] (197 MBps) [2024-12-09T10:49:30.840Z] Copying: 387/512 [MB] (189 MBps) [2024-12-09T10:49:31.099Z] Copying: 512/512 [MB] (average 192 MBps) 00:06:37.920 00:06:38.179 10:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:38.179 10:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ n14jnve5h7vov3k41ma9wxdi3p6fszrp9xi9q5ut7r27cmbdobgu3ptos50y1t0vp9p4r9rus0jrmgagizuhg8erswea4rdk9b6vp60smz1eok2rctji95aa56qgbinvim8057tdhtk7xja3cdeqiag0svyn6m5e8y5ghv5zyhqstcgnvs9hc34jbqetsx2omho21skaao0mo5wu880m7lde8674jh7rvdvl3tyfx1bg7odaevp4iy5qo3sme5th0pw1ewfi4wba7bdacuzrl0amqt9rf3ydw9wytpi6yd1sbg9iv3jm3doz0ibpoaiy81hfceyuycvutbidovtmkx1pvcdq2t7zhw5w163ke3jnueuafrwhckqumhy8gr3zclwpncx91jcu1h5q8msqr27v4qhdwktcb3dlnxsnv7jvzlfledcm0r3r42leby2uw1xj80ns3u06otr1ij7k5jhfon1sdl4yukwgohqii6yy8riekmnjjnq0oiuh2jncv6udccqj6666z82m11lyvbggvydfdpop24c1fd3vh8ydi0lsg6ynr61v6jzxklq6cn1twlz0s5d0is1l6wxfnjk33n7vukswypxqv6cbzirat13uspg6skipp2fuy70zh1jc6tc9pnscasxhp9nhc701rxzcnmbgxgy26bh7jbg4qbuy1vj1nrwuwefmjcrfy1m4zrwguwrvld1obg0yw94947zda7il6ml4180ekfekd5lfsowqjuru1rrgc12jez6nqg1qsj77q25of3itainoxlm593mlh6migfsi6vk9o0e1epjpe1vrjbk2gavbxrwfrnyym0n7gh1ce618t303vwsbhcbd5ukpg0sotlf8fsbognuqli5sob3026isokn3afnznx4yf94bglnd14ji6yw9i0j68md0d8kctwi3joe7gpd95rtc8gjyjm4bt0lwfuom8q9tjzwduhd7fq1tkcxa23a2t7j7wzn8z30chvjk7ooyt1v6t6fi6syl == 
\n\1\4\j\n\v\e\5\h\7\v\o\v\3\k\4\1\m\a\9\w\x\d\i\3\p\6\f\s\z\r\p\9\x\i\9\q\5\u\t\7\r\2\7\c\m\b\d\o\b\g\u\3\p\t\o\s\5\0\y\1\t\0\v\p\9\p\4\r\9\r\u\s\0\j\r\m\g\a\g\i\z\u\h\g\8\e\r\s\w\e\a\4\r\d\k\9\b\6\v\p\6\0\s\m\z\1\e\o\k\2\r\c\t\j\i\9\5\a\a\5\6\q\g\b\i\n\v\i\m\8\0\5\7\t\d\h\t\k\7\x\j\a\3\c\d\e\q\i\a\g\0\s\v\y\n\6\m\5\e\8\y\5\g\h\v\5\z\y\h\q\s\t\c\g\n\v\s\9\h\c\3\4\j\b\q\e\t\s\x\2\o\m\h\o\2\1\s\k\a\a\o\0\m\o\5\w\u\8\8\0\m\7\l\d\e\8\6\7\4\j\h\7\r\v\d\v\l\3\t\y\f\x\1\b\g\7\o\d\a\e\v\p\4\i\y\5\q\o\3\s\m\e\5\t\h\0\p\w\1\e\w\f\i\4\w\b\a\7\b\d\a\c\u\z\r\l\0\a\m\q\t\9\r\f\3\y\d\w\9\w\y\t\p\i\6\y\d\1\s\b\g\9\i\v\3\j\m\3\d\o\z\0\i\b\p\o\a\i\y\8\1\h\f\c\e\y\u\y\c\v\u\t\b\i\d\o\v\t\m\k\x\1\p\v\c\d\q\2\t\7\z\h\w\5\w\1\6\3\k\e\3\j\n\u\e\u\a\f\r\w\h\c\k\q\u\m\h\y\8\g\r\3\z\c\l\w\p\n\c\x\9\1\j\c\u\1\h\5\q\8\m\s\q\r\2\7\v\4\q\h\d\w\k\t\c\b\3\d\l\n\x\s\n\v\7\j\v\z\l\f\l\e\d\c\m\0\r\3\r\4\2\l\e\b\y\2\u\w\1\x\j\8\0\n\s\3\u\0\6\o\t\r\1\i\j\7\k\5\j\h\f\o\n\1\s\d\l\4\y\u\k\w\g\o\h\q\i\i\6\y\y\8\r\i\e\k\m\n\j\j\n\q\0\o\i\u\h\2\j\n\c\v\6\u\d\c\c\q\j\6\6\6\6\z\8\2\m\1\1\l\y\v\b\g\g\v\y\d\f\d\p\o\p\2\4\c\1\f\d\3\v\h\8\y\d\i\0\l\s\g\6\y\n\r\6\1\v\6\j\z\x\k\l\q\6\c\n\1\t\w\l\z\0\s\5\d\0\i\s\1\l\6\w\x\f\n\j\k\3\3\n\7\v\u\k\s\w\y\p\x\q\v\6\c\b\z\i\r\a\t\1\3\u\s\p\g\6\s\k\i\p\p\2\f\u\y\7\0\z\h\1\j\c\6\t\c\9\p\n\s\c\a\s\x\h\p\9\n\h\c\7\0\1\r\x\z\c\n\m\b\g\x\g\y\2\6\b\h\7\j\b\g\4\q\b\u\y\1\v\j\1\n\r\w\u\w\e\f\m\j\c\r\f\y\1\m\4\z\r\w\g\u\w\r\v\l\d\1\o\b\g\0\y\w\9\4\9\4\7\z\d\a\7\i\l\6\m\l\4\1\8\0\e\k\f\e\k\d\5\l\f\s\o\w\q\j\u\r\u\1\r\r\g\c\1\2\j\e\z\6\n\q\g\1\q\s\j\7\7\q\2\5\o\f\3\i\t\a\i\n\o\x\l\m\5\9\3\m\l\h\6\m\i\g\f\s\i\6\v\k\9\o\0\e\1\e\p\j\p\e\1\v\r\j\b\k\2\g\a\v\b\x\r\w\f\r\n\y\y\m\0\n\7\g\h\1\c\e\6\1\8\t\3\0\3\v\w\s\b\h\c\b\d\5\u\k\p\g\0\s\o\t\l\f\8\f\s\b\o\g\n\u\q\l\i\5\s\o\b\3\0\2\6\i\s\o\k\n\3\a\f\n\z\n\x\4\y\f\9\4\b\g\l\n\d\1\4\j\i\6\y\w\9\i\0\j\6\8\m\d\0\d\8\k\c\t\w\i\3\j\o\e\7\g\p\d\9\5\r\t\c\8\g\j\y\j\m\4\b\t\0\l\w\f\u\o\m\8\q\9\t\j\z\w\d\u\h\d\7\f\q\1\t\k\c\x\a\2\3\a\2\t\7\j\7\w\z\n\8\z\3\0\c\h\v\j\k\7\o\o\y\t\1\v\6\t\6\f\i\6\s\y\l ]] 00:06:38.179 10:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:38.179 10:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ n14jnve5h7vov3k41ma9wxdi3p6fszrp9xi9q5ut7r27cmbdobgu3ptos50y1t0vp9p4r9rus0jrmgagizuhg8erswea4rdk9b6vp60smz1eok2rctji95aa56qgbinvim8057tdhtk7xja3cdeqiag0svyn6m5e8y5ghv5zyhqstcgnvs9hc34jbqetsx2omho21skaao0mo5wu880m7lde8674jh7rvdvl3tyfx1bg7odaevp4iy5qo3sme5th0pw1ewfi4wba7bdacuzrl0amqt9rf3ydw9wytpi6yd1sbg9iv3jm3doz0ibpoaiy81hfceyuycvutbidovtmkx1pvcdq2t7zhw5w163ke3jnueuafrwhckqumhy8gr3zclwpncx91jcu1h5q8msqr27v4qhdwktcb3dlnxsnv7jvzlfledcm0r3r42leby2uw1xj80ns3u06otr1ij7k5jhfon1sdl4yukwgohqii6yy8riekmnjjnq0oiuh2jncv6udccqj6666z82m11lyvbggvydfdpop24c1fd3vh8ydi0lsg6ynr61v6jzxklq6cn1twlz0s5d0is1l6wxfnjk33n7vukswypxqv6cbzirat13uspg6skipp2fuy70zh1jc6tc9pnscasxhp9nhc701rxzcnmbgxgy26bh7jbg4qbuy1vj1nrwuwefmjcrfy1m4zrwguwrvld1obg0yw94947zda7il6ml4180ekfekd5lfsowqjuru1rrgc12jez6nqg1qsj77q25of3itainoxlm593mlh6migfsi6vk9o0e1epjpe1vrjbk2gavbxrwfrnyym0n7gh1ce618t303vwsbhcbd5ukpg0sotlf8fsbognuqli5sob3026isokn3afnznx4yf94bglnd14ji6yw9i0j68md0d8kctwi3joe7gpd95rtc8gjyjm4bt0lwfuom8q9tjzwduhd7fq1tkcxa23a2t7j7wzn8z30chvjk7ooyt1v6t6fi6syl == 
\n\1\4\j\n\v\e\5\h\7\v\o\v\3\k\4\1\m\a\9\w\x\d\i\3\p\6\f\s\z\r\p\9\x\i\9\q\5\u\t\7\r\2\7\c\m\b\d\o\b\g\u\3\p\t\o\s\5\0\y\1\t\0\v\p\9\p\4\r\9\r\u\s\0\j\r\m\g\a\g\i\z\u\h\g\8\e\r\s\w\e\a\4\r\d\k\9\b\6\v\p\6\0\s\m\z\1\e\o\k\2\r\c\t\j\i\9\5\a\a\5\6\q\g\b\i\n\v\i\m\8\0\5\7\t\d\h\t\k\7\x\j\a\3\c\d\e\q\i\a\g\0\s\v\y\n\6\m\5\e\8\y\5\g\h\v\5\z\y\h\q\s\t\c\g\n\v\s\9\h\c\3\4\j\b\q\e\t\s\x\2\o\m\h\o\2\1\s\k\a\a\o\0\m\o\5\w\u\8\8\0\m\7\l\d\e\8\6\7\4\j\h\7\r\v\d\v\l\3\t\y\f\x\1\b\g\7\o\d\a\e\v\p\4\i\y\5\q\o\3\s\m\e\5\t\h\0\p\w\1\e\w\f\i\4\w\b\a\7\b\d\a\c\u\z\r\l\0\a\m\q\t\9\r\f\3\y\d\w\9\w\y\t\p\i\6\y\d\1\s\b\g\9\i\v\3\j\m\3\d\o\z\0\i\b\p\o\a\i\y\8\1\h\f\c\e\y\u\y\c\v\u\t\b\i\d\o\v\t\m\k\x\1\p\v\c\d\q\2\t\7\z\h\w\5\w\1\6\3\k\e\3\j\n\u\e\u\a\f\r\w\h\c\k\q\u\m\h\y\8\g\r\3\z\c\l\w\p\n\c\x\9\1\j\c\u\1\h\5\q\8\m\s\q\r\2\7\v\4\q\h\d\w\k\t\c\b\3\d\l\n\x\s\n\v\7\j\v\z\l\f\l\e\d\c\m\0\r\3\r\4\2\l\e\b\y\2\u\w\1\x\j\8\0\n\s\3\u\0\6\o\t\r\1\i\j\7\k\5\j\h\f\o\n\1\s\d\l\4\y\u\k\w\g\o\h\q\i\i\6\y\y\8\r\i\e\k\m\n\j\j\n\q\0\o\i\u\h\2\j\n\c\v\6\u\d\c\c\q\j\6\6\6\6\z\8\2\m\1\1\l\y\v\b\g\g\v\y\d\f\d\p\o\p\2\4\c\1\f\d\3\v\h\8\y\d\i\0\l\s\g\6\y\n\r\6\1\v\6\j\z\x\k\l\q\6\c\n\1\t\w\l\z\0\s\5\d\0\i\s\1\l\6\w\x\f\n\j\k\3\3\n\7\v\u\k\s\w\y\p\x\q\v\6\c\b\z\i\r\a\t\1\3\u\s\p\g\6\s\k\i\p\p\2\f\u\y\7\0\z\h\1\j\c\6\t\c\9\p\n\s\c\a\s\x\h\p\9\n\h\c\7\0\1\r\x\z\c\n\m\b\g\x\g\y\2\6\b\h\7\j\b\g\4\q\b\u\y\1\v\j\1\n\r\w\u\w\e\f\m\j\c\r\f\y\1\m\4\z\r\w\g\u\w\r\v\l\d\1\o\b\g\0\y\w\9\4\9\4\7\z\d\a\7\i\l\6\m\l\4\1\8\0\e\k\f\e\k\d\5\l\f\s\o\w\q\j\u\r\u\1\r\r\g\c\1\2\j\e\z\6\n\q\g\1\q\s\j\7\7\q\2\5\o\f\3\i\t\a\i\n\o\x\l\m\5\9\3\m\l\h\6\m\i\g\f\s\i\6\v\k\9\o\0\e\1\e\p\j\p\e\1\v\r\j\b\k\2\g\a\v\b\x\r\w\f\r\n\y\y\m\0\n\7\g\h\1\c\e\6\1\8\t\3\0\3\v\w\s\b\h\c\b\d\5\u\k\p\g\0\s\o\t\l\f\8\f\s\b\o\g\n\u\q\l\i\5\s\o\b\3\0\2\6\i\s\o\k\n\3\a\f\n\z\n\x\4\y\f\9\4\b\g\l\n\d\1\4\j\i\6\y\w\9\i\0\j\6\8\m\d\0\d\8\k\c\t\w\i\3\j\o\e\7\g\p\d\9\5\r\t\c\8\g\j\y\j\m\4\b\t\0\l\w\f\u\o\m\8\q\9\t\j\z\w\d\u\h\d\7\f\q\1\t\k\c\x\a\2\3\a\2\t\7\j\7\w\z\n\8\z\3\0\c\h\v\j\k\7\o\o\y\t\1\v\6\t\6\f\i\6\s\y\l ]] 00:06:38.179 10:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:38.438 10:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:38.438 10:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:38.438 10:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:38.438 10:49:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:38.438 [2024-12-09 10:49:31.414981] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
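The verification traced above compares the first KiB of each dump against the generated magic, diffs the files whole, and then drains uring0 back into malloc0 to exercise the bdev-to-bdev path; roughly, with the redirect sources assumed:
read -rn1024 verify_magic < magic.dump1             # must equal $magic; dump0 is checked the same way
[[ $verify_magic == "$magic" ]]
diff -q magic.dump0 magic.dump1                     # whole-file comparison
spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62  # copy back through the malloc bdev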
00:06:38.438 [2024-12-09 10:49:31.415138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61471 ] 00:06:38.438 { 00:06:38.438 "subsystems": [ 00:06:38.438 { 00:06:38.438 "subsystem": "bdev", 00:06:38.438 "config": [ 00:06:38.438 { 00:06:38.438 "params": { 00:06:38.438 "block_size": 512, 00:06:38.438 "num_blocks": 1048576, 00:06:38.438 "name": "malloc0" 00:06:38.438 }, 00:06:38.438 "method": "bdev_malloc_create" 00:06:38.438 }, 00:06:38.438 { 00:06:38.438 "params": { 00:06:38.438 "filename": "/dev/zram1", 00:06:38.438 "name": "uring0" 00:06:38.438 }, 00:06:38.438 "method": "bdev_uring_create" 00:06:38.438 }, 00:06:38.438 { 00:06:38.438 "method": "bdev_wait_for_examine" 00:06:38.438 } 00:06:38.438 ] 00:06:38.438 } 00:06:38.438 ] 00:06:38.438 } 00:06:38.438 [2024-12-09 10:49:31.566835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.697 [2024-12-09 10:49:31.623052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.697 [2024-12-09 10:49:31.665552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.073  [2024-12-09T10:49:34.186Z] Copying: 206/512 [MB] (206 MBps) [2024-12-09T10:49:34.443Z] Copying: 410/512 [MB] (203 MBps) [2024-12-09T10:49:34.701Z] Copying: 512/512 [MB] (average 205 MBps) 00:06:41.522 00:06:41.522 10:49:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:41.522 10:49:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:41.522 10:49:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:41.522 10:49:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:41.522 10:49:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:41.522 10:49:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:41.522 10:49:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:41.522 10:49:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:41.781 [2024-12-09 10:49:34.714900] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
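The final dd_uring_copy check exercises the failure path: the JSON config deletes uring0 with bdev_uring_delete, and a follow-up transfer that still names uring0 must fail with "Could not open bdev uring0: No such device", an error the harness normalizes to exit status 1. A sketch of that expectation; /dev/null stands in for the fd output target used by the real script, and the delete config is assumed to still be supplied on fd 61:
# after bdev_uring_delete has removed uring0, any copy that references it has to fail
if spdk_dd --ib=uring0 --of=/dev/null --json /dev/fd/61; then
    echo 'unexpected success: uring0 should no longer exist' >&2
    exit 1
fi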
00:06:41.781 [2024-12-09 10:49:34.715030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61516 ] 00:06:41.781 { 00:06:41.781 "subsystems": [ 00:06:41.781 { 00:06:41.781 "subsystem": "bdev", 00:06:41.781 "config": [ 00:06:41.781 { 00:06:41.781 "params": { 00:06:41.781 "block_size": 512, 00:06:41.781 "num_blocks": 1048576, 00:06:41.781 "name": "malloc0" 00:06:41.781 }, 00:06:41.781 "method": "bdev_malloc_create" 00:06:41.781 }, 00:06:41.781 { 00:06:41.781 "params": { 00:06:41.781 "filename": "/dev/zram1", 00:06:41.781 "name": "uring0" 00:06:41.781 }, 00:06:41.781 "method": "bdev_uring_create" 00:06:41.781 }, 00:06:41.781 { 00:06:41.781 "params": { 00:06:41.781 "name": "uring0" 00:06:41.781 }, 00:06:41.781 "method": "bdev_uring_delete" 00:06:41.781 }, 00:06:41.781 { 00:06:41.781 "method": "bdev_wait_for_examine" 00:06:41.781 } 00:06:41.781 ] 00:06:41.781 } 00:06:41.781 ] 00:06:41.781 } 00:06:41.781 [2024-12-09 10:49:34.867170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.781 [2024-12-09 10:49:34.921330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.040 [2024-12-09 10:49:34.963625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.040  [2024-12-09T10:49:35.786Z] Copying: 0/0 [B] (average 0 Bps) 00:06:42.607 00:06:42.607 10:49:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:42.607 10:49:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:42.607 10:49:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:42.607 10:49:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:06:42.607 10:49:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:42.607 10:49:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:42.607 10:49:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.607 10:49:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:42.607 10:49:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.607 10:49:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.607 10:49:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.607 10:49:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.607 10:49:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.607 10:49:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:42.607 10:49:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:42.607 10:49:35 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:42.607 [2024-12-09 10:49:35.543018] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:42.607 [2024-12-09 10:49:35.543109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61545 ] 00:06:42.607 { 00:06:42.607 "subsystems": [ 00:06:42.607 { 00:06:42.607 "subsystem": "bdev", 00:06:42.607 "config": [ 00:06:42.607 { 00:06:42.607 "params": { 00:06:42.607 "block_size": 512, 00:06:42.607 "num_blocks": 1048576, 00:06:42.607 "name": "malloc0" 00:06:42.607 }, 00:06:42.607 "method": "bdev_malloc_create" 00:06:42.607 }, 00:06:42.607 { 00:06:42.607 "params": { 00:06:42.607 "filename": "/dev/zram1", 00:06:42.607 "name": "uring0" 00:06:42.607 }, 00:06:42.607 "method": "bdev_uring_create" 00:06:42.607 }, 00:06:42.607 { 00:06:42.607 "params": { 00:06:42.607 "name": "uring0" 00:06:42.607 }, 00:06:42.607 "method": "bdev_uring_delete" 00:06:42.607 }, 00:06:42.607 { 00:06:42.607 "method": "bdev_wait_for_examine" 00:06:42.607 } 00:06:42.607 ] 00:06:42.607 } 00:06:42.607 ] 00:06:42.607 } 00:06:42.607 [2024-12-09 10:49:35.697803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.607 [2024-12-09 10:49:35.752220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.866 [2024-12-09 10:49:35.804227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.866 [2024-12-09 10:49:36.020000] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:42.866 [2024-12-09 10:49:36.020127] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:42.866 [2024-12-09 10:49:36.020152] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:06:42.866 [2024-12-09 10:49:36.020181] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.125 [2024-12-09 10:49:36.266872] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:43.384 10:49:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:06:43.384 10:49:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.384 10:49:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:06:43.384 10:49:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:06:43.384 10:49:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:06:43.384 10:49:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.384 10:49:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:43.384 10:49:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:06:43.384 10:49:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:06:43.384 10:49:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:06:43.384 10:49:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:06:43.384 10:49:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:43.644 00:06:43.644 real 0m13.260s 00:06:43.644 user 0m9.069s 00:06:43.644 sys 0m11.404s 00:06:43.644 10:49:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.644 10:49:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:43.644 ************************************ 00:06:43.644 END TEST dd_uring_copy 00:06:43.644 ************************************ 00:06:43.644 00:06:43.644 real 0m13.577s 00:06:43.644 user 0m9.236s 00:06:43.644 sys 0m11.569s 00:06:43.644 10:49:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.644 ************************************ 00:06:43.644 END TEST spdk_dd_uring 00:06:43.644 ************************************ 00:06:43.644 10:49:36 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:43.644 10:49:36 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:43.644 10:49:36 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.644 10:49:36 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.644 10:49:36 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:43.644 ************************************ 00:06:43.644 START TEST spdk_dd_sparse 00:06:43.644 ************************************ 00:06:43.644 10:49:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:43.904 * Looking for test storage... 00:06:43.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:43.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.904 --rc genhtml_branch_coverage=1 00:06:43.904 --rc genhtml_function_coverage=1 00:06:43.904 --rc genhtml_legend=1 00:06:43.904 --rc geninfo_all_blocks=1 00:06:43.904 --rc geninfo_unexecuted_blocks=1 00:06:43.904 00:06:43.904 ' 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:43.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.904 --rc genhtml_branch_coverage=1 00:06:43.904 --rc genhtml_function_coverage=1 00:06:43.904 --rc genhtml_legend=1 00:06:43.904 --rc geninfo_all_blocks=1 00:06:43.904 --rc geninfo_unexecuted_blocks=1 00:06:43.904 00:06:43.904 ' 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:43.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.904 --rc genhtml_branch_coverage=1 00:06:43.904 --rc genhtml_function_coverage=1 00:06:43.904 --rc genhtml_legend=1 00:06:43.904 --rc geninfo_all_blocks=1 00:06:43.904 --rc geninfo_unexecuted_blocks=1 00:06:43.904 00:06:43.904 ' 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:43.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.904 --rc genhtml_branch_coverage=1 00:06:43.904 --rc genhtml_function_coverage=1 00:06:43.904 --rc genhtml_legend=1 00:06:43.904 --rc geninfo_all_blocks=1 00:06:43.904 --rc geninfo_unexecuted_blocks=1 00:06:43.904 00:06:43.904 ' 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.904 10:49:36 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.904 10:49:36 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.905 10:49:36 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:43.905 10:49:36 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.905 10:49:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:43.905 10:49:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:43.905 10:49:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:43.905 10:49:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:43.905 10:49:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:43.905 10:49:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:43.905 10:49:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:43.905 10:49:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:43.905 10:49:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:43.905 10:49:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:43.905 10:49:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:43.905 1+0 records in 00:06:43.905 1+0 records out 00:06:43.905 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00994476 s, 422 MB/s 00:06:43.905 10:49:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:43.905 1+0 records in 00:06:43.905 1+0 records out 00:06:43.905 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00550915 s, 761 MB/s 00:06:43.905 10:49:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:43.905 1+0 records in 00:06:43.905 1+0 records out 00:06:43.905 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00945622 s, 444 MB/s 00:06:43.905 10:49:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:43.905 10:49:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.905 10:49:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.905 10:49:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:43.905 ************************************ 00:06:43.905 START TEST dd_sparse_file_to_file 00:06:43.905 ************************************ 00:06:43.905 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:06:43.905 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:43.905 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:43.905 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:43.905 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:43.905 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:43.905 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:43.905 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:43.905 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:43.905 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:43.905 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:43.905 [2024-12-09 10:49:37.081350] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
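The prepare step above builds its sparse input with plain coreutils: dd_sparse_aio_disk is a 100 MiB backing file for the dd_aio bdev, and file_zero1 gets three 4 MiB data extents at 0, 16 and 32 MiB with holes in between. A rough standalone sketch of the same setup (file names and sizes mirror the trace; this is not the harness code itself, and the stat line is just the apparent-size vs allocated-blocks comparison the tests perform later):

  truncate --size=104857600 dd_sparse_aio_disk          # 100 MiB backing file for the AIO bdev
  dd if=/dev/zero of=file_zero1 bs=4M count=1           # data at 0-4 MiB
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4    # data at 16-20 MiB, hole before it
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8    # data at 32-36 MiB, hole before it
  stat --printf='%s bytes apparent, %b blocks allocated\n' file_zero1
  # expected: 37748736 bytes apparent (36 MiB) but only 24576 512-byte blocks (12 MiB) allocated,
  # i.e. the holes are real; the copies below must preserve both numbers for the tests to pass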
00:06:43.905 [2024-12-09 10:49:37.081471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61653 ] 00:06:44.163 { 00:06:44.163 "subsystems": [ 00:06:44.163 { 00:06:44.163 "subsystem": "bdev", 00:06:44.163 "config": [ 00:06:44.163 { 00:06:44.163 "params": { 00:06:44.163 "block_size": 4096, 00:06:44.163 "filename": "dd_sparse_aio_disk", 00:06:44.163 "name": "dd_aio" 00:06:44.163 }, 00:06:44.163 "method": "bdev_aio_create" 00:06:44.163 }, 00:06:44.163 { 00:06:44.163 "params": { 00:06:44.163 "lvs_name": "dd_lvstore", 00:06:44.163 "bdev_name": "dd_aio" 00:06:44.163 }, 00:06:44.163 "method": "bdev_lvol_create_lvstore" 00:06:44.163 }, 00:06:44.163 { 00:06:44.163 "method": "bdev_wait_for_examine" 00:06:44.163 } 00:06:44.163 ] 00:06:44.163 } 00:06:44.163 ] 00:06:44.163 } 00:06:44.163 [2024-12-09 10:49:37.235353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.163 [2024-12-09 10:49:37.288016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.163 [2024-12-09 10:49:37.328655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.422  [2024-12-09T10:49:37.860Z] Copying: 12/36 [MB] (average 750 MBps) 00:06:44.681 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:44.681 00:06:44.681 real 0m0.650s 00:06:44.681 user 0m0.412s 00:06:44.681 sys 0m0.320s 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:44.681 ************************************ 00:06:44.681 END TEST dd_sparse_file_to_file 00:06:44.681 ************************************ 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:44.681 ************************************ 00:06:44.681 START TEST dd_sparse_file_to_bdev 
00:06:44.681 ************************************ 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:44.681 10:49:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:44.681 [2024-12-09 10:49:37.792485] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:44.681 [2024-12-09 10:49:37.793016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61696 ] 00:06:44.681 { 00:06:44.681 "subsystems": [ 00:06:44.681 { 00:06:44.681 "subsystem": "bdev", 00:06:44.681 "config": [ 00:06:44.681 { 00:06:44.681 "params": { 00:06:44.681 "block_size": 4096, 00:06:44.681 "filename": "dd_sparse_aio_disk", 00:06:44.681 "name": "dd_aio" 00:06:44.681 }, 00:06:44.681 "method": "bdev_aio_create" 00:06:44.681 }, 00:06:44.681 { 00:06:44.681 "params": { 00:06:44.681 "lvs_name": "dd_lvstore", 00:06:44.681 "lvol_name": "dd_lvol", 00:06:44.681 "size_in_mib": 36, 00:06:44.681 "thin_provision": true 00:06:44.681 }, 00:06:44.681 "method": "bdev_lvol_create" 00:06:44.681 }, 00:06:44.681 { 00:06:44.681 "method": "bdev_wait_for_examine" 00:06:44.681 } 00:06:44.681 ] 00:06:44.681 } 00:06:44.681 ] 00:06:44.681 } 00:06:44.941 [2024-12-09 10:49:37.943308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.941 [2024-12-09 10:49:37.996847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.941 [2024-12-09 10:49:38.038735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.200  [2024-12-09T10:49:38.379Z] Copying: 12/36 [MB] (average 375 MBps) 00:06:45.200 00:06:45.200 00:06:45.200 real 0m0.610s 00:06:45.200 user 0m0.400s 00:06:45.200 sys 0m0.302s 00:06:45.200 10:49:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.200 ************************************ 00:06:45.200 END TEST dd_sparse_file_to_bdev 00:06:45.200 ************************************ 00:06:45.200 10:49:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:45.459 10:49:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:06:45.459 10:49:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.459 10:49:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.459 10:49:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:45.459 ************************************ 00:06:45.459 START TEST dd_sparse_bdev_to_file 00:06:45.459 ************************************ 00:06:45.459 10:49:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:06:45.459 10:49:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:06:45.459 10:49:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:45.459 10:49:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:45.459 10:49:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:45.459 10:49:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:45.459 10:49:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:06:45.459 10:49:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:45.459 10:49:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:45.459 [2024-12-09 10:49:38.469630] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:45.459 [2024-12-09 10:49:38.469794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61740 ] 00:06:45.459 { 00:06:45.459 "subsystems": [ 00:06:45.459 { 00:06:45.459 "subsystem": "bdev", 00:06:45.459 "config": [ 00:06:45.459 { 00:06:45.459 "params": { 00:06:45.459 "block_size": 4096, 00:06:45.459 "filename": "dd_sparse_aio_disk", 00:06:45.459 "name": "dd_aio" 00:06:45.459 }, 00:06:45.459 "method": "bdev_aio_create" 00:06:45.459 }, 00:06:45.459 { 00:06:45.459 "method": "bdev_wait_for_examine" 00:06:45.459 } 00:06:45.459 ] 00:06:45.459 } 00:06:45.459 ] 00:06:45.459 } 00:06:45.459 [2024-12-09 10:49:38.622718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.717 [2024-12-09 10:49:38.675027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.717 [2024-12-09 10:49:38.715106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.717  [2024-12-09T10:49:39.155Z] Copying: 12/36 [MB] (average 600 MBps) 00:06:45.976 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:06:45.976 ************************************ 00:06:45.976 END TEST dd_sparse_bdev_to_file 00:06:45.976 ************************************ 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:45.976 00:06:45.976 real 0m0.629s 00:06:45.976 user 0m0.409s 00:06:45.976 sys 0m0.319s 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:06:45.976 ************************************ 00:06:45.976 END TEST spdk_dd_sparse 00:06:45.976 ************************************ 00:06:45.976 00:06:45.976 real 0m2.390s 00:06:45.976 user 0m1.432s 00:06:45.976 sys 0m1.248s 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.976 10:49:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:46.234 10:49:39 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:46.235 10:49:39 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.235 10:49:39 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.235 10:49:39 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:46.235 ************************************ 00:06:46.235 START TEST spdk_dd_negative 00:06:46.235 ************************************ 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:46.235 * Looking for test storage... 
00:06:46.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.235 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.494 --rc genhtml_branch_coverage=1 00:06:46.494 --rc genhtml_function_coverage=1 00:06:46.494 --rc genhtml_legend=1 00:06:46.494 --rc geninfo_all_blocks=1 00:06:46.494 --rc geninfo_unexecuted_blocks=1 00:06:46.494 00:06:46.494 ' 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.494 --rc genhtml_branch_coverage=1 00:06:46.494 --rc genhtml_function_coverage=1 00:06:46.494 --rc genhtml_legend=1 00:06:46.494 --rc geninfo_all_blocks=1 00:06:46.494 --rc geninfo_unexecuted_blocks=1 00:06:46.494 00:06:46.494 ' 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.494 --rc genhtml_branch_coverage=1 00:06:46.494 --rc genhtml_function_coverage=1 00:06:46.494 --rc genhtml_legend=1 00:06:46.494 --rc geninfo_all_blocks=1 00:06:46.494 --rc geninfo_unexecuted_blocks=1 00:06:46.494 00:06:46.494 ' 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.494 --rc genhtml_branch_coverage=1 00:06:46.494 --rc genhtml_function_coverage=1 00:06:46.494 --rc genhtml_legend=1 00:06:46.494 --rc geninfo_all_blocks=1 00:06:46.494 --rc geninfo_unexecuted_blocks=1 00:06:46.494 00:06:46.494 ' 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.494 10:49:39 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:46.495 ************************************ 00:06:46.495 START TEST 
dd_invalid_arguments 00:06:46.495 ************************************ 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.495 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:46.495 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:06:46.495 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:06:46.495 00:06:46.495 CPU options: 00:06:46.495 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:46.495 (like [0,1,10]) 00:06:46.495 --lcores lcore to CPU mapping list. The list is in the format: 00:06:46.495 [<,lcores[@CPUs]>...] 00:06:46.495 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:46.495 Within the group, '-' is used for range separator, 00:06:46.495 ',' is used for single number separator. 00:06:46.495 '( )' can be omitted for single element group, 00:06:46.495 '@' can be omitted if cpus and lcores have the same value 00:06:46.495 --disable-cpumask-locks Disable CPU core lock files. 00:06:46.495 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:46.495 pollers in the app support interrupt mode) 00:06:46.495 -p, --main-core main (primary) core for DPDK 00:06:46.495 00:06:46.495 Configuration options: 00:06:46.495 -c, --config, --json JSON config file 00:06:46.495 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:46.495 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:06:46.495 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:46.495 --rpcs-allowed comma-separated list of permitted RPCS 00:06:46.495 --json-ignore-init-errors don't exit on invalid config entry 00:06:46.495 00:06:46.495 Memory options: 00:06:46.495 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:46.495 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:46.495 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:46.495 -R, --huge-unlink unlink huge files after initialization 00:06:46.495 -n, --mem-channels number of memory channels used for DPDK 00:06:46.495 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:46.495 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:46.495 --no-huge run without using hugepages 00:06:46.495 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:06:46.495 -i, --shm-id shared memory ID (optional) 00:06:46.495 -g, --single-file-segments force creating just one hugetlbfs file 00:06:46.495 00:06:46.495 PCI options: 00:06:46.495 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:46.495 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:46.495 -u, --no-pci disable PCI access 00:06:46.495 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:46.495 00:06:46.495 Log options: 00:06:46.495 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:06:46.495 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:06:46.495 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:06:46.495 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:06:46.495 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:06:46.495 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:06:46.495 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:06:46.495 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:06:46.495 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:06:46.495 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:06:46.495 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:06:46.495 --silence-noticelog disable notice level logging to stderr 00:06:46.495 00:06:46.495 Trace options: 00:06:46.495 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:46.495 setting 0 to disable trace (default 32768) 00:06:46.495 Tracepoints vary in size and can use more than one trace entry. 00:06:46.495 -e, --tpoint-group [:] 00:06:46.495 [2024-12-09 10:49:39.510518] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:06:46.495 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:06:46.495 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:06:46.495 bdev_raid, scheduler, all). 00:06:46.495 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:46.495 a tracepoint group. First tpoint inside a group can be enabled by 00:06:46.495 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:06:46.495 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:06:46.495 in /include/spdk_internal/trace_defs.h 00:06:46.495 00:06:46.495 Other options: 00:06:46.495 -h, --help show this usage 00:06:46.495 -v, --version print SPDK version 00:06:46.495 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:46.495 --env-context Opaque context for use of the env implementation 00:06:46.495 00:06:46.495 Application specific: 00:06:46.495 [--------- DD Options ---------] 00:06:46.495 --if Input file. Must specify either --if or --ib. 00:06:46.495 --ib Input bdev. Must specifier either --if or --ib 00:06:46.495 --of Output file. Must specify either --of or --ob. 00:06:46.495 --ob Output bdev. Must specify either --of or --ob. 00:06:46.495 --iflag Input file flags. 00:06:46.495 --oflag Output file flags. 00:06:46.495 --bs I/O unit size (default: 4096) 00:06:46.495 --qd Queue depth (default: 2) 00:06:46.495 --count I/O unit count. The number of I/O units to copy. (default: all) 00:06:46.495 --skip Skip this many I/O units at start of input. (default: 0) 00:06:46.495 --seek Skip this many I/O units at start of output. (default: 0) 00:06:46.495 --aio Force usage of AIO. (by default io_uring is used if available) 00:06:46.495 --sparse Enable hole skipping in input target 00:06:46.495 Available iflag and oflag values: 00:06:46.495 append - append mode 00:06:46.495 direct - use direct I/O for data 00:06:46.495 directory - fail unless a directory 00:06:46.495 dsync - use synchronized I/O for data 00:06:46.495 noatime - do not update access time 00:06:46.496 noctty - do not assign controlling terminal from file 00:06:46.496 nofollow - do not follow symlinks 00:06:46.496 nonblock - use non-blocking I/O 00:06:46.496 sync - use synchronized I/O for data and metadata 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.496 ************************************ 00:06:46.496 END TEST dd_invalid_arguments 00:06:46.496 ************************************ 00:06:46.496 00:06:46.496 real 0m0.076s 00:06:46.496 user 0m0.041s 00:06:46.496 sys 0m0.032s 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:46.496 ************************************ 00:06:46.496 START TEST dd_double_input 00:06:46.496 ************************************ 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.496 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:46.496 [2024-12-09 10:49:39.660049] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
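dd_double_input passes only because spdk_dd rejects the conflicting arguments: the NOT wrapper from autotest_common.sh inverts the exit status, and the es=22 checked next is EINVAL. A rough standalone equivalent of the same check (paths copied from the trace; the grep on stderr is illustrative, not how the harness verifies it):

  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 2> err.txt; then
      echo "negative test failed: both --if and --ib were accepted" >&2
      exit 1
  fi
  # expected: exit status 22 (EINVAL) and
  # "You may specify either --if or --ib, but not both." on stderr
  grep -q 'either --if or --ib' err.txt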
00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.755 00:06:46.755 real 0m0.077s 00:06:46.755 user 0m0.042s 00:06:46.755 sys 0m0.033s 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:06:46.755 ************************************ 00:06:46.755 END TEST dd_double_input 00:06:46.755 ************************************ 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:46.755 ************************************ 00:06:46.755 START TEST dd_double_output 00:06:46.755 ************************************ 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:46.755 [2024-12-09 10:49:39.806128] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.755 ************************************ 00:06:46.755 END TEST dd_double_output 00:06:46.755 ************************************ 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.755 00:06:46.755 real 0m0.076s 00:06:46.755 user 0m0.039s 00:06:46.755 sys 0m0.036s 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:46.755 ************************************ 00:06:46.755 START TEST dd_no_input 00:06:46.755 ************************************ 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:46.755 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.756 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.756 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.756 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.756 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.756 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.756 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.756 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.756 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:47.014 [2024-12-09 10:49:39.945691] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:06:47.014 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:06:47.014 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.014 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.014 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.014 00:06:47.014 real 0m0.074s 00:06:47.014 user 0m0.034s 00:06:47.014 sys 0m0.039s 00:06:47.014 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.014 ************************************ 00:06:47.014 END TEST dd_no_input 00:06:47.014 ************************************ 00:06:47.014 10:49:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:06:47.014 10:49:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:06:47.014 10:49:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.014 10:49:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.014 10:49:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:47.014 ************************************ 00:06:47.014 START TEST dd_no_output 00:06:47.014 ************************************ 00:06:47.014 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:06:47.014 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:47.014 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:06:47.014 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:47.014 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.014 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:47.015 [2024-12-09 10:49:40.082213] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:06:47.015 10:49:40 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.015 ************************************ 00:06:47.015 END TEST dd_no_output 00:06:47.015 ************************************ 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.015 00:06:47.015 real 0m0.079s 00:06:47.015 user 0m0.038s 00:06:47.015 sys 0m0.039s 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:47.015 ************************************ 00:06:47.015 START TEST dd_wrong_blocksize 00:06:47.015 ************************************ 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:47.015 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:47.274 [2024-12-09 10:49:40.224068] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.274 00:06:47.274 real 0m0.074s 00:06:47.274 user 0m0.043s 00:06:47.274 sys 0m0.030s 00:06:47.274 ************************************ 00:06:47.274 END TEST dd_wrong_blocksize 00:06:47.274 ************************************ 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:47.274 ************************************ 00:06:47.274 START TEST dd_smaller_blocksize 00:06:47.274 ************************************ 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.274 
10:49:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:47.274 10:49:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:47.274 [2024-12-09 10:49:40.357316] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:47.274 [2024-12-09 10:49:40.357388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61968 ] 00:06:47.533 [2024-12-09 10:49:40.509377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.533 [2024-12-09 10:49:40.563918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.533 [2024-12-09 10:49:40.605965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.791 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:48.049 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:48.049 [2024-12-09 10:49:41.084762] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:06:48.049 [2024-12-09 10:49:41.084815] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.049 [2024-12-09 10:49:41.182648] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:48.307 ************************************ 00:06:48.307 END TEST dd_smaller_blocksize 00:06:48.307 ************************************ 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.307 00:06:48.307 real 0m0.988s 00:06:48.307 user 0m0.389s 00:06:48.307 sys 0m0.492s 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:48.307 ************************************ 00:06:48.307 START TEST dd_invalid_count 00:06:48.307 ************************************ 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
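The exit statuses threaded through these negative cases are the raw process statuses from spdk_dd: the argument-validation failures (double input/output, missing --if/--of, --bs=0, and the --count=-9 case starting here) exit with 22, i.e. EINVAL, while dd_smaller_blocksize above exited with 244, which lines up with a -ENOMEM (-12) return wrapped to 8 bits behind the "Cannot allocate memory - try smaller block size value" error. A one-liner to see the wrap (plain bash arithmetic, nothing SPDK-specific):

  echo $(( -12 & 0xFF ))   # 244 -> recorded by the harness as es=244, then folded down to es=1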
00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.307 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:48.308 [2024-12-09 10:49:41.402106] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:06:48.308 ************************************ 00:06:48.308 END TEST dd_invalid_count 00:06:48.308 ************************************ 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.308 00:06:48.308 real 0m0.070s 00:06:48.308 user 0m0.040s 00:06:48.308 sys 0m0.028s 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:48.308 ************************************ 
00:06:48.308 START TEST dd_invalid_oflag 00:06:48.308 ************************************ 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:48.308 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:48.566 [2024-12-09 10:49:41.531624] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:48.566 ************************************ 00:06:48.566 END TEST dd_invalid_oflag 00:06:48.566 ************************************ 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.566 00:06:48.566 real 0m0.070s 00:06:48.566 user 0m0.037s 00:06:48.566 sys 0m0.033s 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:48.566 ************************************ 00:06:48.566 START TEST dd_invalid_iflag 00:06:48.566 
************************************ 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:48.566 [2024-12-09 10:49:41.661159] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:06:48.566 ************************************ 00:06:48.566 END TEST dd_invalid_iflag 00:06:48.566 ************************************ 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.566 00:06:48.566 real 0m0.072s 00:06:48.566 user 0m0.044s 00:06:48.566 sys 0m0.027s 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:48.566 ************************************ 00:06:48.566 START TEST dd_unknown_flag 00:06:48.566 ************************************ 00:06:48.566 
10:49:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:48.566 10:49:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:48.825 [2024-12-09 10:49:41.784502] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:48.825 [2024-12-09 10:49:41.784643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62064 ] 00:06:48.825 [2024-12-09 10:49:41.935440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.825 [2024-12-09 10:49:41.988897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.083 [2024-12-09 10:49:42.029621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.083 [2024-12-09 10:49:42.060481] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:06:49.083 [2024-12-09 10:49:42.060609] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:49.083 [2024-12-09 10:49:42.060658] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:06:49.083 [2024-12-09 10:49:42.060666] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:49.083 [2024-12-09 10:49:42.060869] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:06:49.083 [2024-12-09 10:49:42.060880] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:49.083 [2024-12-09 10:49:42.060926] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:49.083 [2024-12-09 10:49:42.060933] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:49.083 [2024-12-09 10:49:42.157244] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:49.340 ************************************ 00:06:49.340 END TEST dd_unknown_flag 00:06:49.340 ************************************ 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.340 00:06:49.340 real 0m0.533s 00:06:49.340 user 0m0.309s 00:06:49.340 sys 0m0.127s 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:49.340 ************************************ 00:06:49.340 START TEST dd_invalid_json 00:06:49.340 ************************************ 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:49.340 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:49.340 [2024-12-09 10:49:42.380601] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:49.340 [2024-12-09 10:49:42.380673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62091 ] 00:06:49.598 [2024-12-09 10:49:42.532339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.598 [2024-12-09 10:49:42.588084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.598 [2024-12-09 10:49:42.588149] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:06:49.598 [2024-12-09 10:49:42.588162] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:49.598 [2024-12-09 10:49:42.588168] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:49.598 [2024-12-09 10:49:42.588197] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:06:49.598 ************************************ 00:06:49.598 END TEST dd_invalid_json 00:06:49.598 ************************************ 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.598 00:06:49.598 real 0m0.368s 00:06:49.598 user 0m0.209s 00:06:49.598 sys 0m0.058s 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:49.598 ************************************ 00:06:49.598 START TEST dd_invalid_seek 00:06:49.598 ************************************ 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:49.598 
10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:49.598 10:49:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:49.856 [2024-12-09 10:49:42.813156] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:49.856 [2024-12-09 10:49:42.813240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62122 ] 00:06:49.856 { 00:06:49.856 "subsystems": [ 00:06:49.856 { 00:06:49.856 "subsystem": "bdev", 00:06:49.856 "config": [ 00:06:49.856 { 00:06:49.856 "params": { 00:06:49.856 "block_size": 512, 00:06:49.856 "num_blocks": 512, 00:06:49.856 "name": "malloc0" 00:06:49.856 }, 00:06:49.856 "method": "bdev_malloc_create" 00:06:49.856 }, 00:06:49.856 { 00:06:49.856 "params": { 00:06:49.856 "block_size": 512, 00:06:49.856 "num_blocks": 512, 00:06:49.856 "name": "malloc1" 00:06:49.856 }, 00:06:49.856 "method": "bdev_malloc_create" 00:06:49.856 }, 00:06:49.856 { 00:06:49.856 "method": "bdev_wait_for_examine" 00:06:49.856 } 00:06:49.856 ] 00:06:49.856 } 00:06:49.856 ] 00:06:49.856 } 00:06:49.856 [2024-12-09 10:49:42.967760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.856 [2024-12-09 10:49:43.022613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.114 [2024-12-09 10:49:43.063958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.114 [2024-12-09 10:49:43.120241] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:06:50.114 [2024-12-09 10:49:43.120294] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.114 [2024-12-09 10:49:43.220411] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.370 00:06:50.370 real 0m0.566s 00:06:50.370 user 0m0.382s 00:06:50.370 sys 0m0.147s 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:50.370 ************************************ 00:06:50.370 END TEST dd_invalid_seek 00:06:50.370 ************************************ 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:50.370 ************************************ 00:06:50.370 START TEST dd_invalid_skip 00:06:50.370 ************************************ 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.370 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:50.370 { 00:06:50.370 "subsystems": [ 00:06:50.370 { 00:06:50.370 "subsystem": "bdev", 00:06:50.370 "config": [ 00:06:50.370 { 00:06:50.370 "params": { 00:06:50.370 "block_size": 512, 00:06:50.370 "num_blocks": 512, 00:06:50.370 "name": "malloc0" 00:06:50.370 }, 00:06:50.370 "method": "bdev_malloc_create" 00:06:50.370 }, 00:06:50.370 { 00:06:50.370 "params": { 00:06:50.370 "block_size": 512, 00:06:50.370 "num_blocks": 512, 00:06:50.370 "name": "malloc1" 
00:06:50.370 }, 00:06:50.370 "method": "bdev_malloc_create" 00:06:50.370 }, 00:06:50.370 { 00:06:50.370 "method": "bdev_wait_for_examine" 00:06:50.370 } 00:06:50.370 ] 00:06:50.370 } 00:06:50.370 ] 00:06:50.370 } 00:06:50.370 [2024-12-09 10:49:43.447979] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:50.370 [2024-12-09 10:49:43.448066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62159 ] 00:06:50.626 [2024-12-09 10:49:43.598063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.626 [2024-12-09 10:49:43.654142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.626 [2024-12-09 10:49:43.695680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.626 [2024-12-09 10:49:43.751397] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:06:50.626 [2024-12-09 10:49:43.751448] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.884 [2024-12-09 10:49:43.848325] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:50.884 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:06:50.884 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.884 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:06:50.884 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:06:50.884 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:06:50.884 ************************************ 00:06:50.884 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.884 00:06:50.884 real 0m0.570s 00:06:50.884 user 0m0.374s 00:06:50.884 sys 0m0.142s 00:06:50.884 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.884 10:49:43 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:50.884 END TEST dd_invalid_skip 00:06:50.884 ************************************ 00:06:50.884 10:49:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:06:50.884 10:49:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.884 10:49:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.884 10:49:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:50.884 ************************************ 00:06:50.884 START TEST dd_invalid_input_count 00:06:50.884 ************************************ 00:06:50.884 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:06:50.884 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:50.884 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:50.884 10:49:44 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:06:50.884 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:50.884 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:50.884 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:06:50.884 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:50.884 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:06:50.884 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:06:50.884 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:06:50.885 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:50.885 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:50.885 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.885 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.885 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.885 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.885 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.885 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.885 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.885 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.885 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:51.142 { 00:06:51.142 "subsystems": [ 00:06:51.142 { 00:06:51.142 "subsystem": "bdev", 00:06:51.142 "config": [ 00:06:51.142 { 00:06:51.142 "params": { 00:06:51.142 "block_size": 512, 00:06:51.142 "num_blocks": 512, 00:06:51.142 "name": "malloc0" 00:06:51.142 }, 00:06:51.142 "method": "bdev_malloc_create" 00:06:51.142 }, 00:06:51.142 { 00:06:51.142 "params": { 00:06:51.142 "block_size": 512, 00:06:51.142 "num_blocks": 512, 00:06:51.142 "name": "malloc1" 00:06:51.142 }, 00:06:51.142 "method": "bdev_malloc_create" 00:06:51.142 }, 00:06:51.142 { 00:06:51.142 "method": "bdev_wait_for_examine" 00:06:51.142 } 
00:06:51.142 ] 00:06:51.142 } 00:06:51.142 ] 00:06:51.142 } 00:06:51.142 [2024-12-09 10:49:44.083335] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:51.142 [2024-12-09 10:49:44.083452] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62189 ] 00:06:51.142 [2024-12-09 10:49:44.236055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.142 [2024-12-09 10:49:44.284949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.400 [2024-12-09 10:49:44.326291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.400 [2024-12-09 10:49:44.382238] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:06:51.400 [2024-12-09 10:49:44.382371] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.400 [2024-12-09 10:49:44.481350] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:51.657 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.658 ************************************ 00:06:51.658 END TEST dd_invalid_input_count 00:06:51.658 ************************************ 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.658 00:06:51.658 real 0m0.567s 00:06:51.658 user 0m0.382s 00:06:51.658 sys 0m0.144s 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:51.658 ************************************ 00:06:51.658 START TEST dd_invalid_output_count 00:06:51.658 ************************************ 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A 
method_bdev_malloc_create_0 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.658 10:49:44 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:51.658 [2024-12-09 10:49:44.698209] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:51.658 [2024-12-09 10:49:44.698337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62228 ] 00:06:51.658 { 00:06:51.658 "subsystems": [ 00:06:51.658 { 00:06:51.658 "subsystem": "bdev", 00:06:51.658 "config": [ 00:06:51.658 { 00:06:51.658 "params": { 00:06:51.658 "block_size": 512, 00:06:51.658 "num_blocks": 512, 00:06:51.658 "name": "malloc0" 00:06:51.658 }, 00:06:51.658 "method": "bdev_malloc_create" 00:06:51.658 }, 00:06:51.658 { 00:06:51.658 "method": "bdev_wait_for_examine" 00:06:51.658 } 00:06:51.658 ] 00:06:51.658 } 00:06:51.658 ] 00:06:51.658 } 00:06:51.916 [2024-12-09 10:49:44.852068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.916 [2024-12-09 10:49:44.900018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.916 [2024-12-09 10:49:44.943417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.916 [2024-12-09 10:49:44.992515] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:06:51.916 [2024-12-09 10:49:44.992568] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.916 [2024-12-09 10:49:45.091053] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.172 00:06:52.172 real 0m0.547s 00:06:52.172 user 0m0.368s 00:06:52.172 sys 0m0.137s 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:52.172 ************************************ 00:06:52.172 END TEST dd_invalid_output_count 00:06:52.172 ************************************ 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.172 ************************************ 00:06:52.172 START TEST dd_bs_not_multiple 00:06:52.172 ************************************ 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:52.172 10:49:45 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.172 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:52.172 [2024-12-09 10:49:45.326110] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:52.172 [2024-12-09 10:49:45.326172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62254 ] 00:06:52.172 { 00:06:52.172 "subsystems": [ 00:06:52.172 { 00:06:52.172 "subsystem": "bdev", 00:06:52.172 "config": [ 00:06:52.172 { 00:06:52.172 "params": { 00:06:52.172 "block_size": 512, 00:06:52.172 "num_blocks": 512, 00:06:52.172 "name": "malloc0" 00:06:52.172 }, 00:06:52.172 "method": "bdev_malloc_create" 00:06:52.172 }, 00:06:52.172 { 00:06:52.172 "params": { 00:06:52.172 "block_size": 512, 00:06:52.172 "num_blocks": 512, 00:06:52.172 "name": "malloc1" 00:06:52.172 }, 00:06:52.172 "method": "bdev_malloc_create" 00:06:52.172 }, 00:06:52.172 { 00:06:52.172 "method": "bdev_wait_for_examine" 00:06:52.172 } 00:06:52.172 ] 00:06:52.172 } 00:06:52.172 ] 00:06:52.172 } 00:06:52.430 [2024-12-09 10:49:45.463784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.430 [2024-12-09 10:49:45.520099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.430 [2024-12-09 10:49:45.561653] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.686 [2024-12-09 10:49:45.617461] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:06:52.686 [2024-12-09 10:49:45.617515] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.686 [2024-12-09 10:49:45.715535] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:06:52.686 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:06:52.686 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.686 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:06:52.686 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:06:52.686 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:06:52.686 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.686 00:06:52.686 real 0m0.554s 00:06:52.686 user 0m0.372s 00:06:52.686 sys 0m0.149s 00:06:52.686 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.686 10:49:45 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:52.686 ************************************ 00:06:52.686 END TEST dd_bs_not_multiple 00:06:52.686 ************************************ 00:06:52.945 00:06:52.945 real 0m6.673s 00:06:52.945 user 0m3.609s 00:06:52.945 sys 0m2.563s 00:06:52.945 10:49:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.945 10:49:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:52.945 ************************************ 00:06:52.945 END TEST spdk_dd_negative 00:06:52.945 ************************************ 00:06:52.945 00:06:52.945 real 1m16.592s 00:06:52.945 user 0m50.028s 00:06:52.945 sys 0m31.642s 00:06:52.945 10:49:45 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.945 10:49:45 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:52.945 
************************************ 00:06:52.945 END TEST spdk_dd 00:06:52.945 ************************************ 00:06:52.945 10:49:45 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:52.945 10:49:45 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:52.945 10:49:45 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:52.945 10:49:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:52.945 10:49:45 -- common/autotest_common.sh@10 -- # set +x 00:06:52.945 10:49:46 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:52.945 10:49:46 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:52.945 10:49:46 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:52.945 10:49:46 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:52.945 10:49:46 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:52.945 10:49:46 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:52.945 10:49:46 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:52.945 10:49:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:52.945 10:49:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.945 10:49:46 -- common/autotest_common.sh@10 -- # set +x 00:06:52.945 ************************************ 00:06:52.945 START TEST nvmf_tcp 00:06:52.945 ************************************ 00:06:52.945 10:49:46 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:53.205 * Looking for test storage... 00:06:53.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:53.205 10:49:46 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:53.205 10:49:46 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:53.205 10:49:46 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:53.205 10:49:46 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.205 10:49:46 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:53.205 10:49:46 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.205 10:49:46 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:53.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.205 --rc genhtml_branch_coverage=1 00:06:53.205 --rc genhtml_function_coverage=1 00:06:53.205 --rc genhtml_legend=1 00:06:53.205 --rc geninfo_all_blocks=1 00:06:53.205 --rc geninfo_unexecuted_blocks=1 00:06:53.205 00:06:53.205 ' 00:06:53.205 10:49:46 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:53.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.205 --rc genhtml_branch_coverage=1 00:06:53.205 --rc genhtml_function_coverage=1 00:06:53.205 --rc genhtml_legend=1 00:06:53.205 --rc geninfo_all_blocks=1 00:06:53.205 --rc geninfo_unexecuted_blocks=1 00:06:53.205 00:06:53.205 ' 00:06:53.205 10:49:46 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:53.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.205 --rc genhtml_branch_coverage=1 00:06:53.205 --rc genhtml_function_coverage=1 00:06:53.205 --rc genhtml_legend=1 00:06:53.205 --rc geninfo_all_blocks=1 00:06:53.205 --rc geninfo_unexecuted_blocks=1 00:06:53.205 00:06:53.205 ' 00:06:53.205 10:49:46 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:53.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.205 --rc genhtml_branch_coverage=1 00:06:53.205 --rc genhtml_function_coverage=1 00:06:53.205 --rc genhtml_legend=1 00:06:53.205 --rc geninfo_all_blocks=1 00:06:53.205 --rc geninfo_unexecuted_blocks=1 00:06:53.205 00:06:53.205 ' 00:06:53.205 10:49:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:53.206 10:49:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:53.206 10:49:46 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:53.206 10:49:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:53.206 10:49:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.206 10:49:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:53.206 ************************************ 00:06:53.206 START TEST nvmf_target_core 00:06:53.206 ************************************ 00:06:53.206 10:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:53.206 * Looking for test storage... 00:06:53.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:53.463 10:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:53.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.464 --rc genhtml_branch_coverage=1 00:06:53.464 --rc genhtml_function_coverage=1 00:06:53.464 --rc genhtml_legend=1 00:06:53.464 --rc geninfo_all_blocks=1 00:06:53.464 --rc geninfo_unexecuted_blocks=1 00:06:53.464 00:06:53.464 ' 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:53.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.464 --rc genhtml_branch_coverage=1 00:06:53.464 --rc genhtml_function_coverage=1 00:06:53.464 --rc genhtml_legend=1 00:06:53.464 --rc geninfo_all_blocks=1 00:06:53.464 --rc geninfo_unexecuted_blocks=1 00:06:53.464 00:06:53.464 ' 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:53.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.464 --rc genhtml_branch_coverage=1 00:06:53.464 --rc genhtml_function_coverage=1 00:06:53.464 --rc genhtml_legend=1 00:06:53.464 --rc geninfo_all_blocks=1 00:06:53.464 --rc geninfo_unexecuted_blocks=1 00:06:53.464 00:06:53.464 ' 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:53.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.464 --rc genhtml_branch_coverage=1 00:06:53.464 --rc genhtml_function_coverage=1 00:06:53.464 --rc genhtml_legend=1 00:06:53.464 --rc geninfo_all_blocks=1 00:06:53.464 --rc geninfo_unexecuted_blocks=1 00:06:53.464 00:06:53.464 ' 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
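Editor's note: the NVME_HOSTNQN / NVME_HOSTID pair above is derived from nvme-cli; the host ID is simply the UUID suffix of the generated NQN. A standalone sketch of that derivation, assuming the suffix-stripping mechanism (the variable names here are local to the sketch):

    # Generate a host NQN with nvme-cli and reuse its UUID suffix as the host ID,
    # mirroring the NVME_HOSTNQN / NVME_HOSTID values captured in the log above.
    hostnqn=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    hostid=${hostnqn##*:}         # strip everything up to the last ':'
    echo "--hostnqn=$hostnqn --hostid=$hostid"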
00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:53.464 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:53.464 ************************************ 00:06:53.464 START TEST nvmf_host_management 00:06:53.464 ************************************ 00:06:53.464 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:53.720 * Looking for test storage... 
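Editor's note: the "[: : integer expression expected" message above comes from a numeric test on an unset/empty variable ('[' '' -eq 1 ']'); the script tolerates it because the test simply evaluates false. A defensive variant is sketched below; SOME_FLAG is a placeholder name, not necessarily the variable involved at common.sh line 33.

    # Defaulting the expansion avoids the "[: : integer expression expected" warning
    # while preserving the same branch behaviour when the flag is unset or empty.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi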
00:06:53.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:53.720 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:53.720 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:53.720 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:53.720 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:53.720 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.720 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.720 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.720 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.720 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:53.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.721 --rc genhtml_branch_coverage=1 00:06:53.721 --rc genhtml_function_coverage=1 00:06:53.721 --rc genhtml_legend=1 00:06:53.721 --rc geninfo_all_blocks=1 00:06:53.721 --rc geninfo_unexecuted_blocks=1 00:06:53.721 00:06:53.721 ' 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:53.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.721 --rc genhtml_branch_coverage=1 00:06:53.721 --rc genhtml_function_coverage=1 00:06:53.721 --rc genhtml_legend=1 00:06:53.721 --rc geninfo_all_blocks=1 00:06:53.721 --rc geninfo_unexecuted_blocks=1 00:06:53.721 00:06:53.721 ' 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:53.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.721 --rc genhtml_branch_coverage=1 00:06:53.721 --rc genhtml_function_coverage=1 00:06:53.721 --rc genhtml_legend=1 00:06:53.721 --rc geninfo_all_blocks=1 00:06:53.721 --rc geninfo_unexecuted_blocks=1 00:06:53.721 00:06:53.721 ' 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:53.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.721 --rc genhtml_branch_coverage=1 00:06:53.721 --rc genhtml_function_coverage=1 00:06:53.721 --rc genhtml_legend=1 00:06:53.721 --rc geninfo_all_blocks=1 00:06:53.721 --rc geninfo_unexecuted_blocks=1 00:06:53.721 00:06:53.721 ' 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
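Editor's note: each nested test repeats the same scripts/common.sh probe shown above: it takes the lcov version, splits it on dots, and compares it field by field against 2 to decide which coverage flags to export. A condensed standalone sketch of that comparison (not the literal cmp_versions implementation):

    # Compare two dotted versions field by field; return 0 if $1 < $2.
    version_lt() {
        local IFS=.-
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    # Same decision the log makes: old lcov gets the branch/function coverage options.
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi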
00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:53.721 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:53.721 10:49:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:53.721 Cannot find device "nvmf_init_br" 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:53.721 Cannot find device "nvmf_init_br2" 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:53.721 Cannot find device "nvmf_tgt_br" 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:53.721 Cannot find device "nvmf_tgt_br2" 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:06:53.721 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:53.978 Cannot find device "nvmf_init_br" 00:06:53.978 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:06:53.978 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:53.978 Cannot find device "nvmf_init_br2" 00:06:53.978 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:06:53.978 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:53.978 Cannot find device "nvmf_tgt_br" 00:06:53.978 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:06:53.978 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:53.978 Cannot find device "nvmf_tgt_br2" 00:06:53.978 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:06:53.978 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:53.978 Cannot find device "nvmf_br" 00:06:53.978 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:06:53.978 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:53.978 Cannot find device "nvmf_init_if" 00:06:53.978 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:06:53.978 10:49:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:53.978 Cannot find device "nvmf_init_if2" 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:53.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:53.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:53.978 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:54.236 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:54.236 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.170 ms 00:06:54.236 00:06:54.236 --- 10.0.0.3 ping statistics --- 00:06:54.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.236 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:54.236 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:54.236 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.099 ms 00:06:54.236 00:06:54.236 --- 10.0.0.4 ping statistics --- 00:06:54.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.236 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:54.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:54.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:06:54.236 00:06:54.236 --- 10.0.0.1 ping statistics --- 00:06:54.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.236 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:54.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:54.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:06:54.236 00:06:54.236 --- 10.0.0.2 ping statistics --- 00:06:54.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.236 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:54.236 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:54.237 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:54.237 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:54.237 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:54.237 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.237 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:54.237 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62595 00:06:54.237 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:54.237 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62595 00:06:54.237 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62595 ']' 00:06:54.237 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.237 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.237 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.237 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.237 10:49:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:54.495 [2024-12-09 10:49:47.424037] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:06:54.495 [2024-12-09 10:49:47.424125] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.495 [2024-12-09 10:49:47.564362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.495 [2024-12-09 10:49:47.619237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:54.495 [2024-12-09 10:49:47.619281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:54.495 [2024-12-09 10:49:47.619288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:54.495 [2024-12-09 10:49:47.619294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:54.495 [2024-12-09 10:49:47.619299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:54.495 [2024-12-09 10:49:47.620439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.495 [2024-12-09 10:49:47.620538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.495 [2024-12-09 10:49:47.620726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.495 [2024-12-09 10:49:47.620731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:54.495 [2024-12-09 10:49:47.664048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.433 [2024-12-09 10:49:48.383358] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
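Editor's note: host_management.sh has just created the TCP transport over RPC (-t tcp -o -u 8192) and is about to cat a freshly written rpcs.txt into rpc_cmd; the file's contents are not echoed in this log. The sketch below is a plausible reconstruction based on the Malloc0 bdev, the SPDKISFASTANDAWESOME serial, the nqn.2016-06.io.spdk:cnode0 NQN, and the 10.0.0.3:4420 listener that appear elsewhere in the output; the exact parameters are assumptions.

    # Hypothetical rpcs.txt contents; the method names are real SPDK RPCs, but the
    # precise arguments used by this run are not visible in the log.
    bdev_malloc_create -b Malloc0 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420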
00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.433 Malloc0 00:06:55.433 [2024-12-09 10:49:48.459275] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62649 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62649 /var/tmp/bdevperf.sock 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62649 ']' 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
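Editor's note: the initiator side attaches through a JSON config that gen_nvmf_target_json assembles on the fly and hands to bdevperf over /dev/fd/63, as the command line above shows. A standalone sketch of the same invocation with the config written to a regular file; only the bdev_nvme_attach_controller entry appears verbatim in the log, so the outer subsystems/bdev wrapper and the file name are assumptions.

    # Write the controller-attach config and point bdevperf at it
    # (queue depth 64, 64 KiB verify I/O, 10 second run, as in the log).
    cat > /tmp/bdevperf_nvme0.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false } } ] } ] }
    EOF
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json \
        -q 64 -o 65536 -w verify -t 10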
00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:55.433 { 00:06:55.433 "params": { 00:06:55.433 "name": "Nvme$subsystem", 00:06:55.433 "trtype": "$TEST_TRANSPORT", 00:06:55.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:55.433 "adrfam": "ipv4", 00:06:55.433 "trsvcid": "$NVMF_PORT", 00:06:55.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:55.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:55.433 "hdgst": ${hdgst:-false}, 00:06:55.433 "ddgst": ${ddgst:-false} 00:06:55.433 }, 00:06:55.433 "method": "bdev_nvme_attach_controller" 00:06:55.433 } 00:06:55.433 EOF 00:06:55.433 )") 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:55.433 10:49:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:55.433 "params": { 00:06:55.433 "name": "Nvme0", 00:06:55.433 "trtype": "tcp", 00:06:55.433 "traddr": "10.0.0.3", 00:06:55.433 "adrfam": "ipv4", 00:06:55.433 "trsvcid": "4420", 00:06:55.433 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:55.433 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:55.433 "hdgst": false, 00:06:55.433 "ddgst": false 00:06:55.433 }, 00:06:55.433 "method": "bdev_nvme_attach_controller" 00:06:55.433 }' 00:06:55.433 [2024-12-09 10:49:48.570536] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:55.433 [2024-12-09 10:49:48.570603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62649 ] 00:06:55.691 [2024-12-09 10:49:48.801761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.950 [2024-12-09 10:49:48.877452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.950 [2024-12-09 10:49:48.930038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.950 Running I/O for 10 seconds... 
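Editor's note: after "Running I/O for 10 seconds..." the test polls bdevperf's RPC socket until at least 100 reads have completed before it removes the host. A condensed sketch of that wait loop; waitforio's retry bookkeeping and the sleep interval are simplified here.

    # Poll bdev_get_iostat over the bdevperf RPC socket until Nvme0n1 has served
    # at least 100 reads (the log below shows the loop breaking at read_io_count=899).
    for ((i = 10; i != 0; i--)); do
        reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                    | jq -r '.bdevs[0].num_read_ops')
        if [ "$reads" -ge 100 ]; then
            break
        fi
        sleep 0.25
    done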
00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.520 10:49:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.520 10:49:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:56.520 [2024-12-09 10:49:49.605496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.520 [2024-12-09 10:49:49.605912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.520 [2024-12-09 10:49:49.605918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.605925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.605931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.605939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.605945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.605952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.605958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.605967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.605973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.605981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.605986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.605994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:56.521 [2024-12-09 10:49:49.606461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.521 [2024-12-09 10:49:49.606469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6bc00 is same with the state(6) to be set 00:06:56.522 [2024-12-09 10:49:49.606614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.522 [2024-12-09 10:49:49.606629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.522 [2024-12-09 10:49:49.606638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.522 [2024-12-09 10:49:49.606644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.522 [2024-12-09 10:49:49.606652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:56.522 [2024-12-09 10:49:49.606658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.522 [2024-12-09 10:49:49.606665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
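The wall of nvme_qpair messages above is the expected fallout of the host-management step started at host_management.sh@84/@85: the test first waits until bdevperf has completed some I/O, then removes the host from the subsystem, which closes its connection so every queued WRITE (and the outstanding admin ASYNC EVENT REQUESTs) completes with ABORTED - SQ DELETION; re-adding the host lets the controller reset that follows reconnect. A hedged sketch of that sequence, using plain rpc.py calls in place of the suite's rpc_cmd wrapper (the helper name and sleep interval are illustrative, not the exact script logic):

# Poll bdevperf's RPC socket until Nvme0n1 has served enough reads,
# the same check waitforio performs above (threshold of 100 read ops).
wait_for_reads() {
  local i count
  for ((i = 10; i > 0; i--)); do
    count=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
      | jq -r '.bdevs[0].num_read_ops')
    ((count >= 100)) && return 0
    sleep 0.25
  done
  return 1
}

wait_for_reads
# Dropping the host tears down its queues (the SQ-deletion aborts above);
# re-adding it allows bdev_nvme's controller reset to reconnect.
./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0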
00:06:56.522 [2024-12-09 10:49:49.606671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:56.522 [2024-12-09 10:49:49.606677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6cce0 is same with the state(6) to be set 00:06:56.522 [2024-12-09 10:49:49.607756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:56.522 task offset: 122880 on job bdev=Nvme0n1 fails 00:06:56.522 00:06:56.522 Latency(us) 00:06:56.522 [2024-12-09T10:49:49.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.522 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:56.522 Job: Nvme0n1 ended in about 0.56 seconds with error 00:06:56.522 Verification LBA range: start 0x0 length 0x400 00:06:56.522 Nvme0n1 : 0.56 1701.56 106.35 113.44 0.00 34384.59 1645.55 34570.96 00:06:56.522 [2024-12-09T10:49:49.701Z] =================================================================================================================== 00:06:56.522 [2024-12-09T10:49:49.701Z] Total : 1701.56 106.35 113.44 0.00 34384.59 1645.55 34570.96 00:06:56.522 [2024-12-09 10:49:49.610039] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.522 [2024-12-09 10:49:49.610067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6cce0 (9): Bad file descriptor 00:06:56.522 [2024-12-09 10:49:49.619306] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:06:57.459 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62649 00:06:57.459 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62649) - No such process 00:06:57.459 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:57.459 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:57.459 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:57.459 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:57.459 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:57.459 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:57.459 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:57.459 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:57.459 { 00:06:57.459 "params": { 00:06:57.459 "name": "Nvme$subsystem", 00:06:57.459 "trtype": "$TEST_TRANSPORT", 00:06:57.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:57.459 "adrfam": "ipv4", 00:06:57.459 "trsvcid": "$NVMF_PORT", 00:06:57.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:57.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:57.459 "hdgst": ${hdgst:-false}, 00:06:57.459 "ddgst": ${ddgst:-false} 00:06:57.459 }, 00:06:57.459 "method": 
"bdev_nvme_attach_controller" 00:06:57.459 } 00:06:57.459 EOF 00:06:57.459 )") 00:06:57.459 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:57.459 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:57.459 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:57.459 10:49:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:57.459 "params": { 00:06:57.459 "name": "Nvme0", 00:06:57.459 "trtype": "tcp", 00:06:57.459 "traddr": "10.0.0.3", 00:06:57.459 "adrfam": "ipv4", 00:06:57.459 "trsvcid": "4420", 00:06:57.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:57.459 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:57.459 "hdgst": false, 00:06:57.459 "ddgst": false 00:06:57.459 }, 00:06:57.459 "method": "bdev_nvme_attach_controller" 00:06:57.459 }' 00:06:57.718 [2024-12-09 10:49:50.669929] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:06:57.718 [2024-12-09 10:49:50.670007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62692 ] 00:06:57.718 [2024-12-09 10:49:50.821138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.718 [2024-12-09 10:49:50.878874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.977 [2024-12-09 10:49:50.931431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.977 Running I/O for 1 seconds... 00:06:58.913 1728.00 IOPS, 108.00 MiB/s 00:06:58.913 Latency(us) 00:06:58.913 [2024-12-09T10:49:52.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:58.913 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:58.913 Verification LBA range: start 0x0 length 0x400 00:06:58.913 Nvme0n1 : 1.00 1785.03 111.56 0.00 0.00 35218.25 5351.63 32052.54 00:06:58.913 [2024-12-09T10:49:52.092Z] =================================================================================================================== 00:06:58.913 [2024-12-09T10:49:52.092Z] Total : 1785.03 111.56 0.00 0.00 35218.25 5351.63 32052.54 00:06:59.172 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:59.172 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:59.172 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:06:59.172 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:59.172 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:59.172 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:59.172 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:59.431 
10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:59.431 rmmod nvme_tcp 00:06:59.431 rmmod nvme_fabrics 00:06:59.431 rmmod nvme_keyring 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62595 ']' 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62595 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62595 ']' 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62595 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62595 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:59.431 killing process with pid 62595 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62595' 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62595 00:06:59.431 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62595 00:06:59.690 [2024-12-09 10:49:52.712342] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:59.690 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:59.950 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:59.950 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:59.950 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:59.950 10:49:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:59.950 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:59.950 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.950 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.950 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.950 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:06:59.950 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:59.950 00:06:59.950 real 0m6.506s 00:06:59.950 user 0m23.173s 00:06:59.950 sys 0m1.702s 00:06:59.950 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.950 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.950 ************************************ 00:06:59.950 END TEST nvmf_host_management 00:06:59.950 ************************************ 00:06:59.950 10:49:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:59.950 10:49:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.950 10:49:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.950 10:49:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:59.950 ************************************ 00:06:59.950 START TEST nvmf_lvol 00:06:59.950 ************************************ 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:00.209 * Looking for test 
storage... 00:07:00.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:00.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.209 --rc genhtml_branch_coverage=1 00:07:00.209 --rc genhtml_function_coverage=1 00:07:00.209 --rc genhtml_legend=1 00:07:00.209 --rc geninfo_all_blocks=1 00:07:00.209 --rc geninfo_unexecuted_blocks=1 00:07:00.209 00:07:00.209 ' 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:00.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.209 --rc genhtml_branch_coverage=1 00:07:00.209 --rc genhtml_function_coverage=1 00:07:00.209 --rc genhtml_legend=1 00:07:00.209 --rc geninfo_all_blocks=1 00:07:00.209 --rc geninfo_unexecuted_blocks=1 00:07:00.209 00:07:00.209 ' 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:00.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.209 --rc genhtml_branch_coverage=1 00:07:00.209 --rc genhtml_function_coverage=1 00:07:00.209 --rc genhtml_legend=1 00:07:00.209 --rc geninfo_all_blocks=1 00:07:00.209 --rc geninfo_unexecuted_blocks=1 00:07:00.209 00:07:00.209 ' 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:00.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.209 --rc genhtml_branch_coverage=1 00:07:00.209 --rc genhtml_function_coverage=1 00:07:00.209 --rc genhtml_legend=1 00:07:00.209 --rc geninfo_all_blocks=1 00:07:00.209 --rc geninfo_unexecuted_blocks=1 00:07:00.209 00:07:00.209 ' 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.209 10:49:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.209 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.210 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:00.469 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:00.469 
10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
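At this point nvmftestinit switches to the nvmf_lvol test's virtual topology: the variables above name the initiator/target veth pairs, the nvmf_br bridge, the nvmf_tgt_ns_spdk namespace and the 10.0.0.1-10.0.0.4 addresses; the entries that follow first tear down any leftovers (hence the harmless "Cannot find device" messages) and then rebuild the topology. A condensed sketch of what nvmf_veth_init sets up below, covering only the first initiator/target pair (the real helper also creates the *_if2/*_br2 pair and installs iptables rules):

# The target side lives in its own network namespace; the host side stays in
# the root namespace and reaches it through the nvmf_br bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# The SPDK target then listens on 10.0.0.3 inside nvmf_tgt_ns_spdk while the
# initiator connects from 10.0.0.1 across the bridge.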
00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:00.469 Cannot find device "nvmf_init_br" 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:00.469 Cannot find device "nvmf_init_br2" 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:00.469 Cannot find device "nvmf_tgt_br" 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:00.469 Cannot find device "nvmf_tgt_br2" 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:00.469 Cannot find device "nvmf_init_br" 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:00.469 Cannot find device "nvmf_init_br2" 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:00.469 Cannot find device "nvmf_tgt_br" 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:00.469 Cannot find device "nvmf_tgt_br2" 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:00.469 Cannot find device "nvmf_br" 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:00.469 Cannot find device "nvmf_init_if" 00:07:00.469 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:00.470 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:00.470 Cannot find device "nvmf_init_if2" 00:07:00.470 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:00.470 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:00.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:00.470 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:00.470 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:00.470 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:00.470 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:00.470 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:00.470 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:00.470 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:00.729 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:00.730 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:00.730 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:07:00.730 00:07:00.730 --- 10.0.0.3 ping statistics --- 00:07:00.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.730 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:00.730 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:00.730 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.116 ms 00:07:00.730 00:07:00.730 --- 10.0.0.4 ping statistics --- 00:07:00.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.730 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:00.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:00.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:07:00.730 00:07:00.730 --- 10.0.0.1 ping statistics --- 00:07:00.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.730 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:00.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:00.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:07:00.730 00:07:00.730 --- 10.0.0.2 ping statistics --- 00:07:00.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:00.730 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62971 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62971 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 62971 ']' 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.730 10:49:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:00.990 [2024-12-09 10:49:53.960383] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:07:00.990 [2024-12-09 10:49:53.960459] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.990 [2024-12-09 10:49:54.117290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.249 [2024-12-09 10:49:54.200929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.249 [2024-12-09 10:49:54.200999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.249 [2024-12-09 10:49:54.201007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.249 [2024-12-09 10:49:54.201013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.249 [2024-12-09 10:49:54.201018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.249 [2024-12-09 10:49:54.202366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.249 [2024-12-09 10:49:54.202479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.249 [2024-12-09 10:49:54.202481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.249 [2024-12-09 10:49:54.282836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.817 10:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.817 10:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:01.817 10:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:01.817 10:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:01.817 10:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:01.817 10:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.817 10:49:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:02.075 [2024-12-09 10:49:55.201524] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.075 10:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:02.333 10:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:02.333 10:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:02.902 10:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:02.902 10:49:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:02.902 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:03.471 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ef3d7ca1-385e-4166-8100-2bd441eda2ad 00:07:03.471 10:49:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ef3d7ca1-385e-4166-8100-2bd441eda2ad lvol 20 00:07:03.471 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=540e0d55-d9a9-4068-8b04-fdc9a3cb4240 00:07:03.471 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:03.729 10:49:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 540e0d55-d9a9-4068-8b04-fdc9a3cb4240 00:07:04.297 10:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:04.297 [2024-12-09 10:49:57.427388] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:04.297 10:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:04.556 10:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=63046 00:07:04.556 10:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:04.556 10:49:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:05.500 10:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 540e0d55-d9a9-4068-8b04-fdc9a3cb4240 MY_SNAPSHOT 00:07:06.068 10:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=63b360a8-d9f3-497d-967f-3db44b1ed987 00:07:06.068 10:49:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 540e0d55-d9a9-4068-8b04-fdc9a3cb4240 30 00:07:06.328 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 63b360a8-d9f3-497d-967f-3db44b1ed987 MY_CLONE 00:07:06.589 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=167c1db8-cc0c-41b1-98e5-71d7c8e652fb 00:07:06.589 10:49:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 167c1db8-cc0c-41b1-98e5-71d7c8e652fb 00:07:07.157 10:50:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 63046 00:07:15.275 Initializing NVMe Controllers 00:07:15.275 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:15.275 Controller IO queue size 128, less than required. 00:07:15.275 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:15.275 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:15.275 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:15.275 Initialization complete. Launching workers. 
00:07:15.275 ======================================================== 00:07:15.275 Latency(us) 00:07:15.275 Device Information : IOPS MiB/s Average min max 00:07:15.275 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8414.40 32.87 15211.58 2615.46 81581.15 00:07:15.275 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8548.70 33.39 14982.34 693.42 103277.14 00:07:15.275 ======================================================== 00:07:15.275 Total : 16963.10 66.26 15096.05 693.42 103277.14 00:07:15.275 00:07:15.275 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:15.275 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 540e0d55-d9a9-4068-8b04-fdc9a3cb4240 00:07:15.533 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef3d7ca1-385e-4166-8100-2bd441eda2ad 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:15.791 rmmod nvme_tcp 00:07:15.791 rmmod nvme_fabrics 00:07:15.791 rmmod nvme_keyring 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62971 ']' 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62971 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 62971 ']' 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 62971 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62971 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.791 killing process with pid 62971 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62971' 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 62971 00:07:15.791 10:50:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 62971 00:07:16.049 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:16.049 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:16.049 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:16.049 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:16.049 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:16.049 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:16.049 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:16.049 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:16.049 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:16.049 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:16.049 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:16.049 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:16.049 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:16.308 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:16.308 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:16.308 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:16.308 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:16.308 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:16.308 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:16.308 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:16.308 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:16.308 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:16.308 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:16.308 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.308 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.308 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.308 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:16.308 00:07:16.308 real 0m16.313s 00:07:16.308 user 1m6.374s 00:07:16.308 sys 0m3.911s 00:07:16.308 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:16.308 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.308 ************************************ 00:07:16.308 END TEST nvmf_lvol 00:07:16.308 ************************************ 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:16.566 ************************************ 00:07:16.566 START TEST nvmf_lvs_grow 00:07:16.566 ************************************ 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:16.566 * Looking for test storage... 00:07:16.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:16.566 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.567 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:16.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.567 --rc genhtml_branch_coverage=1 00:07:16.567 --rc genhtml_function_coverage=1 00:07:16.567 --rc genhtml_legend=1 00:07:16.567 --rc geninfo_all_blocks=1 00:07:16.567 --rc geninfo_unexecuted_blocks=1 00:07:16.567 00:07:16.567 ' 00:07:16.567 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:16.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.567 --rc genhtml_branch_coverage=1 00:07:16.567 --rc genhtml_function_coverage=1 00:07:16.567 --rc genhtml_legend=1 00:07:16.567 --rc geninfo_all_blocks=1 00:07:16.567 --rc geninfo_unexecuted_blocks=1 00:07:16.567 00:07:16.567 ' 00:07:16.567 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:16.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.567 --rc genhtml_branch_coverage=1 00:07:16.567 --rc genhtml_function_coverage=1 00:07:16.567 --rc genhtml_legend=1 00:07:16.567 --rc geninfo_all_blocks=1 00:07:16.567 --rc geninfo_unexecuted_blocks=1 00:07:16.567 00:07:16.567 ' 00:07:16.567 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:16.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.567 --rc genhtml_branch_coverage=1 00:07:16.567 --rc genhtml_function_coverage=1 00:07:16.567 --rc genhtml_legend=1 00:07:16.567 --rc geninfo_all_blocks=1 00:07:16.567 --rc geninfo_unexecuted_blocks=1 00:07:16.567 00:07:16.567 ' 00:07:16.567 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:16.567 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:16.567 10:50:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.567 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.567 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.567 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.567 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.567 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.567 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.567 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.567 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:16.826 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
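The lvs_grow tests below exercise growing an lvstore that sits on a file-backed AIO bdev, with I/O driven by a separate bdevperf process over the bdevperf_rpc_sock defined above. A hedged outline of the main calls that appear later in this log (paths, sizes, and the cnode0/10.0.0.3 listener are the ones used by the test; the UUID handling is condensed):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    truncate -s 200M ./aio_bdev                                       # small backing file to start with
    $rpc bdev_aio_create ./aio_bdev aio_bdev 4096                     # expose it as an AIO bdev
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)          # lvstore on top; prints its UUID
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 clusters before growth
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)                  # 150 MiB lvol
    truncate -s 400M ./aio_bdev                                       # grow the backing file
    $rpc bdev_aio_rescan aio_bdev                                     # let SPDK pick up the new size
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49; the lvstore grow step comes later in the test
    # the lvol is exported through an NVMe-oF/TCP subsystem on 10.0.0.3:4420 (nvmf_create_subsystem,
    # nvmf_subsystem_add_ns, nvmf_subsystem_add_listener), then bdevperf attaches over its own RPC socket:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0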
00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:16.826 Cannot find device "nvmf_init_br" 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:16.826 Cannot find device "nvmf_init_br2" 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:16.826 Cannot find device "nvmf_tgt_br" 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:16.826 Cannot find device "nvmf_tgt_br2" 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:16.826 Cannot find device "nvmf_init_br" 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:16.826 Cannot find device "nvmf_init_br2" 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:16.826 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:16.826 Cannot find device "nvmf_tgt_br" 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:16.827 Cannot find device "nvmf_tgt_br2" 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:16.827 Cannot find device "nvmf_br" 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:16.827 Cannot find device "nvmf_init_if" 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:16.827 Cannot find device "nvmf_init_if2" 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:16.827 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:16.827 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:16.827 10:50:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
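The ipts lines that follow open the firewall for the TCP listener, and each rule is tagged with an SPDK_NVMF comment so that the matching iptr teardown (visible in the nvmf_lvol cleanup above) can strip only SPDK's rules. Reconstructed as a sketch from the expansions in this log (not the literal common.sh definitions):

    # wrapper: apply the rule and stamp it with an identifying comment
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # teardown: rewrite the ruleset without any SPDK-tagged entries
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }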
00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:17.086 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:17.086 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.164 ms 00:07:17.086 00:07:17.086 --- 10.0.0.3 ping statistics --- 00:07:17.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.086 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:17.086 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:17.086 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.135 ms 00:07:17.086 00:07:17.086 --- 10.0.0.4 ping statistics --- 00:07:17.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.086 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:17.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:17.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:07:17.086 00:07:17.086 --- 10.0.0.1 ping statistics --- 00:07:17.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.086 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:17.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:17.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:07:17.086 00:07:17.086 --- 10.0.0.2 ping statistics --- 00:07:17.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.086 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:17.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63433 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63433 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63433 ']' 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:17.086 10:50:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:17.344 [2024-12-09 10:50:10.267909] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:07:17.344 [2024-12-09 10:50:10.267986] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.344 [2024-12-09 10:50:10.423684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.344 [2024-12-09 10:50:10.477872] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.344 [2024-12-09 10:50:10.477918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.344 [2024-12-09 10:50:10.477925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.344 [2024-12-09 10:50:10.477929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.344 [2024-12-09 10:50:10.477934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:17.344 [2024-12-09 10:50:10.478240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.344 [2024-12-09 10:50:10.519451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.317 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.317 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:18.317 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:18.317 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:18.317 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:18.317 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.317 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:18.575 [2024-12-09 10:50:11.505357] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.575 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:18.575 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.575 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.575 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:18.575 ************************************ 00:07:18.575 START TEST lvs_grow_clean 00:07:18.575 ************************************ 00:07:18.575 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:18.575 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:18.575 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:18.575 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:18.575 10:50:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:18.575 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:18.575 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:18.575 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:18.575 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:18.575 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:18.833 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:18.833 10:50:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:18.833 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2224e149-e11d-4d12-bbfe-5a7468a63c3d 00:07:19.091 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:19.091 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2224e149-e11d-4d12-bbfe-5a7468a63c3d 00:07:19.091 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:19.091 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:19.091 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2224e149-e11d-4d12-bbfe-5a7468a63c3d lvol 150 00:07:19.349 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4a0a10fd-67fa-4128-a089-c182ce53dc7c 00:07:19.349 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:19.349 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:19.607 [2024-12-09 10:50:12.632659] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:19.607 [2024-12-09 10:50:12.632723] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:19.607 true 00:07:19.607 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:19.607 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2224e149-e11d-4d12-bbfe-5a7468a63c3d 00:07:19.865 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:19.865 10:50:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:20.123 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4a0a10fd-67fa-4128-a089-c182ce53dc7c 00:07:20.381 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:20.638 [2024-12-09 10:50:13.559367] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:20.638 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:20.638 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63511 00:07:20.638 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:20.638 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:20.638 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63511 /var/tmp/bdevperf.sock 00:07:20.638 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63511 ']' 00:07:20.638 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:20.638 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:20.638 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:20.638 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.638 10:50:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:20.895 [2024-12-09 10:50:13.856032] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:07:20.895 [2024-12-09 10:50:13.856125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63511 ] 00:07:20.895 [2024-12-09 10:50:14.008315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.895 [2024-12-09 10:50:14.065086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.152 [2024-12-09 10:50:14.107477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.717 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.717 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:21.717 10:50:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:21.974 Nvme0n1 00:07:21.974 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:22.231 [ 00:07:22.231 { 00:07:22.231 "name": "Nvme0n1", 00:07:22.231 "aliases": [ 00:07:22.231 "4a0a10fd-67fa-4128-a089-c182ce53dc7c" 00:07:22.231 ], 00:07:22.231 "product_name": "NVMe disk", 00:07:22.231 "block_size": 4096, 00:07:22.231 "num_blocks": 38912, 00:07:22.231 "uuid": "4a0a10fd-67fa-4128-a089-c182ce53dc7c", 00:07:22.231 "numa_id": -1, 00:07:22.231 "assigned_rate_limits": { 00:07:22.231 "rw_ios_per_sec": 0, 00:07:22.231 "rw_mbytes_per_sec": 0, 00:07:22.231 "r_mbytes_per_sec": 0, 00:07:22.231 "w_mbytes_per_sec": 0 00:07:22.231 }, 00:07:22.231 "claimed": false, 00:07:22.231 "zoned": false, 00:07:22.231 "supported_io_types": { 00:07:22.231 "read": true, 00:07:22.231 "write": true, 00:07:22.231 "unmap": true, 00:07:22.231 "flush": true, 00:07:22.231 "reset": true, 00:07:22.231 "nvme_admin": true, 00:07:22.231 "nvme_io": true, 00:07:22.231 "nvme_io_md": false, 00:07:22.231 "write_zeroes": true, 00:07:22.231 "zcopy": false, 00:07:22.231 "get_zone_info": false, 00:07:22.231 "zone_management": false, 00:07:22.231 "zone_append": false, 00:07:22.231 "compare": true, 00:07:22.231 "compare_and_write": true, 00:07:22.231 "abort": true, 00:07:22.231 "seek_hole": false, 00:07:22.231 "seek_data": false, 00:07:22.231 "copy": true, 00:07:22.231 "nvme_iov_md": false 00:07:22.231 }, 00:07:22.231 "memory_domains": [ 00:07:22.231 { 00:07:22.231 "dma_device_id": "system", 00:07:22.231 "dma_device_type": 1 00:07:22.231 } 00:07:22.231 ], 00:07:22.231 "driver_specific": { 00:07:22.231 "nvme": [ 00:07:22.231 { 00:07:22.231 "trid": { 00:07:22.231 "trtype": "TCP", 00:07:22.231 "adrfam": "IPv4", 00:07:22.231 "traddr": "10.0.0.3", 00:07:22.231 "trsvcid": "4420", 00:07:22.231 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:22.231 }, 00:07:22.231 "ctrlr_data": { 00:07:22.231 "cntlid": 1, 00:07:22.231 "vendor_id": "0x8086", 00:07:22.231 "model_number": "SPDK bdev Controller", 00:07:22.231 "serial_number": "SPDK0", 00:07:22.231 "firmware_revision": "25.01", 00:07:22.231 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:22.231 "oacs": { 00:07:22.231 "security": 0, 00:07:22.231 "format": 0, 00:07:22.231 "firmware": 0, 
00:07:22.231 "ns_manage": 0 00:07:22.231 }, 00:07:22.231 "multi_ctrlr": true, 00:07:22.231 "ana_reporting": false 00:07:22.231 }, 00:07:22.231 "vs": { 00:07:22.231 "nvme_version": "1.3" 00:07:22.231 }, 00:07:22.231 "ns_data": { 00:07:22.231 "id": 1, 00:07:22.231 "can_share": true 00:07:22.231 } 00:07:22.231 } 00:07:22.231 ], 00:07:22.231 "mp_policy": "active_passive" 00:07:22.231 } 00:07:22.231 } 00:07:22.231 ] 00:07:22.231 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63535 00:07:22.231 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:22.231 10:50:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:22.488 Running I/O for 10 seconds... 00:07:23.424 Latency(us) 00:07:23.424 [2024-12-09T10:50:16.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.424 Nvme0n1 : 1.00 9144.00 35.72 0.00 0.00 0.00 0.00 0.00 00:07:23.424 [2024-12-09T10:50:16.603Z] =================================================================================================================== 00:07:23.424 [2024-12-09T10:50:16.603Z] Total : 9144.00 35.72 0.00 0.00 0.00 0.00 0.00 00:07:23.424 00:07:24.363 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2224e149-e11d-4d12-bbfe-5a7468a63c3d 00:07:24.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.363 Nvme0n1 : 2.00 9056.50 35.38 0.00 0.00 0.00 0.00 0.00 00:07:24.363 [2024-12-09T10:50:17.542Z] =================================================================================================================== 00:07:24.363 [2024-12-09T10:50:17.542Z] Total : 9056.50 35.38 0.00 0.00 0.00 0.00 0.00 00:07:24.363 00:07:24.621 true 00:07:24.621 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2224e149-e11d-4d12-bbfe-5a7468a63c3d 00:07:24.621 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:24.881 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:24.881 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:24.881 10:50:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63535 00:07:25.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.448 Nvme0n1 : 3.00 8969.67 35.04 0.00 0.00 0.00 0.00 0.00 00:07:25.448 [2024-12-09T10:50:18.627Z] =================================================================================================================== 00:07:25.448 [2024-12-09T10:50:18.627Z] Total : 8969.67 35.04 0.00 0.00 0.00 0.00 0.00 00:07:25.448 00:07:26.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.383 Nvme0n1 : 4.00 9042.25 35.32 0.00 0.00 0.00 0.00 0.00 00:07:26.383 [2024-12-09T10:50:19.562Z] 
=================================================================================================================== 00:07:26.383 [2024-12-09T10:50:19.562Z] Total : 9042.25 35.32 0.00 0.00 0.00 0.00 0.00 00:07:26.383 00:07:27.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.320 Nvme0n1 : 5.00 9062.60 35.40 0.00 0.00 0.00 0.00 0.00 00:07:27.320 [2024-12-09T10:50:20.499Z] =================================================================================================================== 00:07:27.320 [2024-12-09T10:50:20.499Z] Total : 9062.60 35.40 0.00 0.00 0.00 0.00 0.00 00:07:27.320 00:07:28.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.698 Nvme0n1 : 6.00 9055.00 35.37 0.00 0.00 0.00 0.00 0.00 00:07:28.698 [2024-12-09T10:50:21.877Z] =================================================================================================================== 00:07:28.698 [2024-12-09T10:50:21.877Z] Total : 9055.00 35.37 0.00 0.00 0.00 0.00 0.00 00:07:28.698 00:07:29.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.634 Nvme0n1 : 7.00 9049.57 35.35 0.00 0.00 0.00 0.00 0.00 00:07:29.634 [2024-12-09T10:50:22.813Z] =================================================================================================================== 00:07:29.634 [2024-12-09T10:50:22.813Z] Total : 9049.57 35.35 0.00 0.00 0.00 0.00 0.00 00:07:29.634 00:07:30.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.591 Nvme0n1 : 8.00 9029.62 35.27 0.00 0.00 0.00 0.00 0.00 00:07:30.591 [2024-12-09T10:50:23.770Z] =================================================================================================================== 00:07:30.591 [2024-12-09T10:50:23.770Z] Total : 9029.62 35.27 0.00 0.00 0.00 0.00 0.00 00:07:30.591 00:07:31.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.528 Nvme0n1 : 9.00 9042.33 35.32 0.00 0.00 0.00 0.00 0.00 00:07:31.528 [2024-12-09T10:50:24.707Z] =================================================================================================================== 00:07:31.528 [2024-12-09T10:50:24.707Z] Total : 9042.33 35.32 0.00 0.00 0.00 0.00 0.00 00:07:31.528 00:07:32.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.470 Nvme0n1 : 10.00 9039.80 35.31 0.00 0.00 0.00 0.00 0.00 00:07:32.470 [2024-12-09T10:50:25.649Z] =================================================================================================================== 00:07:32.470 [2024-12-09T10:50:25.649Z] Total : 9039.80 35.31 0.00 0.00 0.00 0.00 0.00 00:07:32.470 00:07:32.470 00:07:32.470 Latency(us) 00:07:32.470 [2024-12-09T10:50:25.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.470 Nvme0n1 : 10.00 9048.24 35.34 0.00 0.00 14142.34 8242.08 65936.66 00:07:32.470 [2024-12-09T10:50:25.649Z] =================================================================================================================== 00:07:32.470 [2024-12-09T10:50:25.649Z] Total : 9048.24 35.34 0.00 0.00 14142.34 8242.08 65936.66 00:07:32.470 { 00:07:32.470 "results": [ 00:07:32.470 { 00:07:32.470 "job": "Nvme0n1", 00:07:32.470 "core_mask": "0x2", 00:07:32.470 "workload": "randwrite", 00:07:32.470 "status": "finished", 00:07:32.470 "queue_depth": 128, 00:07:32.470 "io_size": 4096, 00:07:32.470 "runtime": 
10.004824, 00:07:32.470 "iops": 9048.235131372627, 00:07:32.470 "mibps": 35.34466848192432, 00:07:32.470 "io_failed": 0, 00:07:32.470 "io_timeout": 0, 00:07:32.470 "avg_latency_us": 14142.34484744039, 00:07:32.470 "min_latency_us": 8242.08209606987, 00:07:32.470 "max_latency_us": 65936.65676855895 00:07:32.470 } 00:07:32.470 ], 00:07:32.470 "core_count": 1 00:07:32.470 } 00:07:32.470 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63511 00:07:32.470 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63511 ']' 00:07:32.470 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63511 00:07:32.470 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:32.470 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.470 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63511 00:07:32.470 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:32.470 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:32.470 killing process with pid 63511 00:07:32.470 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63511' 00:07:32.470 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63511 00:07:32.470 Received shutdown signal, test time was about 10.000000 seconds 00:07:32.470 00:07:32.470 Latency(us) 00:07:32.470 [2024-12-09T10:50:25.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.470 [2024-12-09T10:50:25.649Z] =================================================================================================================== 00:07:32.470 [2024-12-09T10:50:25.649Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:32.470 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63511 00:07:32.729 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:32.987 10:50:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:33.246 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2224e149-e11d-4d12-bbfe-5a7468a63c3d 00:07:33.246 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:33.506 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:33.506 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:33.506 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:33.506 [2024-12-09 10:50:26.634293] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:33.506 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2224e149-e11d-4d12-bbfe-5a7468a63c3d 00:07:33.506 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:33.506 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2224e149-e11d-4d12-bbfe-5a7468a63c3d 00:07:33.506 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.506 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.506 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.506 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.506 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.506 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:33.506 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:33.506 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:33.506 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2224e149-e11d-4d12-bbfe-5a7468a63c3d 00:07:33.764 request: 00:07:33.764 { 00:07:33.764 "uuid": "2224e149-e11d-4d12-bbfe-5a7468a63c3d", 00:07:33.764 "method": "bdev_lvol_get_lvstores", 00:07:33.764 "req_id": 1 00:07:33.764 } 00:07:33.764 Got JSON-RPC error response 00:07:33.764 response: 00:07:33.764 { 00:07:33.764 "code": -19, 00:07:33.764 "message": "No such device" 00:07:33.764 } 00:07:33.764 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:33.764 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:33.764 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:33.764 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:33.764 10:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:34.330 aio_bdev 00:07:34.330 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
4a0a10fd-67fa-4128-a089-c182ce53dc7c 00:07:34.330 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=4a0a10fd-67fa-4128-a089-c182ce53dc7c 00:07:34.330 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:34.330 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:34.330 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:34.330 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:34.330 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:34.330 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4a0a10fd-67fa-4128-a089-c182ce53dc7c -t 2000 00:07:34.589 [ 00:07:34.589 { 00:07:34.589 "name": "4a0a10fd-67fa-4128-a089-c182ce53dc7c", 00:07:34.589 "aliases": [ 00:07:34.589 "lvs/lvol" 00:07:34.589 ], 00:07:34.589 "product_name": "Logical Volume", 00:07:34.589 "block_size": 4096, 00:07:34.589 "num_blocks": 38912, 00:07:34.589 "uuid": "4a0a10fd-67fa-4128-a089-c182ce53dc7c", 00:07:34.589 "assigned_rate_limits": { 00:07:34.589 "rw_ios_per_sec": 0, 00:07:34.589 "rw_mbytes_per_sec": 0, 00:07:34.589 "r_mbytes_per_sec": 0, 00:07:34.589 "w_mbytes_per_sec": 0 00:07:34.589 }, 00:07:34.589 "claimed": false, 00:07:34.589 "zoned": false, 00:07:34.589 "supported_io_types": { 00:07:34.589 "read": true, 00:07:34.589 "write": true, 00:07:34.589 "unmap": true, 00:07:34.589 "flush": false, 00:07:34.589 "reset": true, 00:07:34.589 "nvme_admin": false, 00:07:34.589 "nvme_io": false, 00:07:34.589 "nvme_io_md": false, 00:07:34.589 "write_zeroes": true, 00:07:34.589 "zcopy": false, 00:07:34.589 "get_zone_info": false, 00:07:34.589 "zone_management": false, 00:07:34.589 "zone_append": false, 00:07:34.589 "compare": false, 00:07:34.589 "compare_and_write": false, 00:07:34.589 "abort": false, 00:07:34.589 "seek_hole": true, 00:07:34.589 "seek_data": true, 00:07:34.589 "copy": false, 00:07:34.589 "nvme_iov_md": false 00:07:34.589 }, 00:07:34.589 "driver_specific": { 00:07:34.589 "lvol": { 00:07:34.589 "lvol_store_uuid": "2224e149-e11d-4d12-bbfe-5a7468a63c3d", 00:07:34.589 "base_bdev": "aio_bdev", 00:07:34.589 "thin_provision": false, 00:07:34.589 "num_allocated_clusters": 38, 00:07:34.589 "snapshot": false, 00:07:34.589 "clone": false, 00:07:34.589 "esnap_clone": false 00:07:34.589 } 00:07:34.589 } 00:07:34.589 } 00:07:34.589 ] 00:07:34.589 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:34.589 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2224e149-e11d-4d12-bbfe-5a7468a63c3d 00:07:34.589 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:34.847 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:34.847 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:07:34.847 10:50:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2224e149-e11d-4d12-bbfe-5a7468a63c3d 00:07:35.107 10:50:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:35.107 10:50:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4a0a10fd-67fa-4128-a089-c182ce53dc7c 00:07:35.107 10:50:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2224e149-e11d-4d12-bbfe-5a7468a63c3d 00:07:35.366 10:50:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:35.635 10:50:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:36.220 00:07:36.220 real 0m17.635s 00:07:36.220 user 0m16.507s 00:07:36.220 sys 0m2.518s 00:07:36.220 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.220 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:36.220 ************************************ 00:07:36.220 END TEST lvs_grow_clean 00:07:36.220 ************************************ 00:07:36.220 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:36.220 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:36.220 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.220 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.220 ************************************ 00:07:36.220 START TEST lvs_grow_dirty 00:07:36.220 ************************************ 00:07:36.220 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:36.220 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:36.220 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:36.220 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:36.220 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:36.220 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:36.220 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:36.220 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:36.220 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:36.220 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:36.479 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:36.479 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:36.738 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=47cca552-9d8c-459d-8cd8-89a1f7c30aba 00:07:36.738 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47cca552-9d8c-459d-8cd8-89a1f7c30aba 00:07:36.738 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:36.997 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:36.997 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:36.997 10:50:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 47cca552-9d8c-459d-8cd8-89a1f7c30aba lvol 150 00:07:37.256 10:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e679d838-7ffc-4341-b1d8-b1bbecb8878e 00:07:37.256 10:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:37.256 10:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:37.257 [2024-12-09 10:50:30.385643] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:37.257 [2024-12-09 10:50:30.385739] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:37.257 true 00:07:37.257 10:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47cca552-9d8c-459d-8cd8-89a1f7c30aba 00:07:37.257 10:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:37.516 10:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:37.516 10:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:37.775 10:50:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e679d838-7ffc-4341-b1d8-b1bbecb8878e 00:07:38.034 10:50:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:38.293 [2024-12-09 10:50:31.252409] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:38.293 10:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:38.552 10:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63770 00:07:38.552 10:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:38.552 10:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:38.552 10:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63770 /var/tmp/bdevperf.sock 00:07:38.552 10:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63770 ']' 00:07:38.552 10:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:38.552 10:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:38.552 10:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:38.552 10:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.552 10:50:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:38.552 [2024-12-09 10:50:31.548284] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:07:38.552 [2024-12-09 10:50:31.548372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63770 ] 00:07:38.552 [2024-12-09 10:50:31.701969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.811 [2024-12-09 10:50:31.757941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.811 [2024-12-09 10:50:31.799157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.379 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.379 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:39.379 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:39.638 Nvme0n1 00:07:39.638 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:39.898 [ 00:07:39.898 { 00:07:39.898 "name": "Nvme0n1", 00:07:39.898 "aliases": [ 00:07:39.898 "e679d838-7ffc-4341-b1d8-b1bbecb8878e" 00:07:39.898 ], 00:07:39.898 "product_name": "NVMe disk", 00:07:39.898 "block_size": 4096, 00:07:39.898 "num_blocks": 38912, 00:07:39.898 "uuid": "e679d838-7ffc-4341-b1d8-b1bbecb8878e", 00:07:39.898 "numa_id": -1, 00:07:39.898 "assigned_rate_limits": { 00:07:39.898 "rw_ios_per_sec": 0, 00:07:39.898 "rw_mbytes_per_sec": 0, 00:07:39.898 "r_mbytes_per_sec": 0, 00:07:39.898 "w_mbytes_per_sec": 0 00:07:39.898 }, 00:07:39.898 "claimed": false, 00:07:39.898 "zoned": false, 00:07:39.898 "supported_io_types": { 00:07:39.898 "read": true, 00:07:39.898 "write": true, 00:07:39.898 "unmap": true, 00:07:39.898 "flush": true, 00:07:39.898 "reset": true, 00:07:39.898 "nvme_admin": true, 00:07:39.898 "nvme_io": true, 00:07:39.898 "nvme_io_md": false, 00:07:39.898 "write_zeroes": true, 00:07:39.898 "zcopy": false, 00:07:39.898 "get_zone_info": false, 00:07:39.898 "zone_management": false, 00:07:39.898 "zone_append": false, 00:07:39.898 "compare": true, 00:07:39.898 "compare_and_write": true, 00:07:39.898 "abort": true, 00:07:39.898 "seek_hole": false, 00:07:39.898 "seek_data": false, 00:07:39.898 "copy": true, 00:07:39.898 "nvme_iov_md": false 00:07:39.898 }, 00:07:39.898 "memory_domains": [ 00:07:39.898 { 00:07:39.898 "dma_device_id": "system", 00:07:39.898 "dma_device_type": 1 00:07:39.898 } 00:07:39.898 ], 00:07:39.898 "driver_specific": { 00:07:39.898 "nvme": [ 00:07:39.898 { 00:07:39.898 "trid": { 00:07:39.898 "trtype": "TCP", 00:07:39.898 "adrfam": "IPv4", 00:07:39.898 "traddr": "10.0.0.3", 00:07:39.898 "trsvcid": "4420", 00:07:39.898 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:39.898 }, 00:07:39.898 "ctrlr_data": { 00:07:39.898 "cntlid": 1, 00:07:39.898 "vendor_id": "0x8086", 00:07:39.898 "model_number": "SPDK bdev Controller", 00:07:39.898 "serial_number": "SPDK0", 00:07:39.898 "firmware_revision": "25.01", 00:07:39.898 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:39.898 "oacs": { 00:07:39.898 "security": 0, 00:07:39.898 "format": 0, 00:07:39.898 "firmware": 0, 
00:07:39.898 "ns_manage": 0 00:07:39.898 }, 00:07:39.898 "multi_ctrlr": true, 00:07:39.898 "ana_reporting": false 00:07:39.898 }, 00:07:39.898 "vs": { 00:07:39.898 "nvme_version": "1.3" 00:07:39.898 }, 00:07:39.898 "ns_data": { 00:07:39.898 "id": 1, 00:07:39.898 "can_share": true 00:07:39.898 } 00:07:39.898 } 00:07:39.898 ], 00:07:39.898 "mp_policy": "active_passive" 00:07:39.898 } 00:07:39.898 } 00:07:39.898 ] 00:07:39.898 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63799 00:07:39.898 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:39.898 10:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:40.157 Running I/O for 10 seconds... 00:07:41.096 Latency(us) 00:07:41.096 [2024-12-09T10:50:34.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.096 Nvme0n1 : 1.00 8732.00 34.11 0.00 0.00 0.00 0.00 0.00 00:07:41.096 [2024-12-09T10:50:34.275Z] =================================================================================================================== 00:07:41.096 [2024-12-09T10:50:34.275Z] Total : 8732.00 34.11 0.00 0.00 0.00 0.00 0.00 00:07:41.096 00:07:42.033 10:50:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 47cca552-9d8c-459d-8cd8-89a1f7c30aba 00:07:42.033 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.033 Nvme0n1 : 2.00 9065.00 35.41 0.00 0.00 0.00 0.00 0.00 00:07:42.033 [2024-12-09T10:50:35.212Z] =================================================================================================================== 00:07:42.033 [2024-12-09T10:50:35.213Z] Total : 9065.00 35.41 0.00 0.00 0.00 0.00 0.00 00:07:42.034 00:07:42.306 true 00:07:42.306 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47cca552-9d8c-459d-8cd8-89a1f7c30aba 00:07:42.306 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:42.306 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:42.306 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:42.565 10:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63799 00:07:43.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.134 Nvme0n1 : 3.00 9133.67 35.68 0.00 0.00 0.00 0.00 0.00 00:07:43.134 [2024-12-09T10:50:36.313Z] =================================================================================================================== 00:07:43.134 [2024-12-09T10:50:36.313Z] Total : 9133.67 35.68 0.00 0.00 0.00 0.00 0.00 00:07:43.134 00:07:44.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.071 Nvme0n1 : 4.00 9104.25 35.56 0.00 0.00 0.00 0.00 0.00 00:07:44.071 [2024-12-09T10:50:37.251Z] 
=================================================================================================================== 00:07:44.072 [2024-12-09T10:50:37.251Z] Total : 9104.25 35.56 0.00 0.00 0.00 0.00 0.00 00:07:44.072 00:07:45.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.007 Nvme0n1 : 5.00 8959.80 35.00 0.00 0.00 0.00 0.00 0.00 00:07:45.007 [2024-12-09T10:50:38.186Z] =================================================================================================================== 00:07:45.007 [2024-12-09T10:50:38.186Z] Total : 8959.80 35.00 0.00 0.00 0.00 0.00 0.00 00:07:45.007 00:07:45.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.943 Nvme0n1 : 6.00 8884.67 34.71 0.00 0.00 0.00 0.00 0.00 00:07:45.943 [2024-12-09T10:50:39.122Z] =================================================================================================================== 00:07:45.943 [2024-12-09T10:50:39.122Z] Total : 8884.67 34.71 0.00 0.00 0.00 0.00 0.00 00:07:45.943 00:07:47.321 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.321 Nvme0n1 : 7.00 8831.00 34.50 0.00 0.00 0.00 0.00 0.00 00:07:47.321 [2024-12-09T10:50:40.500Z] =================================================================================================================== 00:07:47.321 [2024-12-09T10:50:40.500Z] Total : 8831.00 34.50 0.00 0.00 0.00 0.00 0.00 00:07:47.321 00:07:48.257 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.257 Nvme0n1 : 8.00 8165.75 31.90 0.00 0.00 0.00 0.00 0.00 00:07:48.257 [2024-12-09T10:50:41.436Z] =================================================================================================================== 00:07:48.257 [2024-12-09T10:50:41.436Z] Total : 8165.75 31.90 0.00 0.00 0.00 0.00 0.00 00:07:48.257 00:07:49.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.191 Nvme0n1 : 9.00 8175.67 31.94 0.00 0.00 0.00 0.00 0.00 00:07:49.191 [2024-12-09T10:50:42.370Z] =================================================================================================================== 00:07:49.191 [2024-12-09T10:50:42.370Z] Total : 8175.67 31.94 0.00 0.00 0.00 0.00 0.00 00:07:49.191 00:07:50.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.144 Nvme0n1 : 10.00 8246.80 32.21 0.00 0.00 0.00 0.00 0.00 00:07:50.144 [2024-12-09T10:50:43.323Z] =================================================================================================================== 00:07:50.144 [2024-12-09T10:50:43.323Z] Total : 8246.80 32.21 0.00 0.00 0.00 0.00 0.00 00:07:50.144 00:07:50.144 00:07:50.144 Latency(us) 00:07:50.144 [2024-12-09T10:50:43.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.144 Nvme0n1 : 10.01 8252.91 32.24 0.00 0.00 15504.08 2046.21 608082.50 00:07:50.144 [2024-12-09T10:50:43.323Z] =================================================================================================================== 00:07:50.144 [2024-12-09T10:50:43.323Z] Total : 8252.91 32.24 0.00 0.00 15504.08 2046.21 608082.50 00:07:50.144 { 00:07:50.144 "results": [ 00:07:50.144 { 00:07:50.144 "job": "Nvme0n1", 00:07:50.144 "core_mask": "0x2", 00:07:50.144 "workload": "randwrite", 00:07:50.144 "status": "finished", 00:07:50.144 "queue_depth": 128, 00:07:50.144 "io_size": 4096, 00:07:50.144 "runtime": 
10.008101, 00:07:50.144 "iops": 8252.914314114136, 00:07:50.144 "mibps": 32.23794653950834, 00:07:50.144 "io_failed": 0, 00:07:50.144 "io_timeout": 0, 00:07:50.144 "avg_latency_us": 15504.080703443986, 00:07:50.144 "min_latency_us": 2046.2113537117905, 00:07:50.144 "max_latency_us": 608082.5013100436 00:07:50.144 } 00:07:50.144 ], 00:07:50.144 "core_count": 1 00:07:50.144 } 00:07:50.144 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63770 00:07:50.144 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63770 ']' 00:07:50.144 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63770 00:07:50.144 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:50.144 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.144 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63770 00:07:50.144 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:50.144 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:50.144 killing process with pid 63770 00:07:50.144 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63770' 00:07:50.144 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63770 00:07:50.144 Received shutdown signal, test time was about 10.000000 seconds 00:07:50.144 00:07:50.144 Latency(us) 00:07:50.144 [2024-12-09T10:50:43.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.144 [2024-12-09T10:50:43.323Z] =================================================================================================================== 00:07:50.144 [2024-12-09T10:50:43.323Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:50.144 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63770 00:07:50.402 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:50.660 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:50.917 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47cca552-9d8c-459d-8cd8-89a1f7c30aba 00:07:50.917 10:50:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63433 
00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63433 00:07:51.176 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63433 Killed "${NVMF_APP[@]}" "$@" 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63932 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63932 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63932 ']' 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.176 10:50:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:51.176 [2024-12-09 10:50:44.210429] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:07:51.176 [2024-12-09 10:50:44.211435] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.434 [2024-12-09 10:50:44.368038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.434 [2024-12-09 10:50:44.425010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.434 [2024-12-09 10:50:44.425059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.434 [2024-12-09 10:50:44.425067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.434 [2024-12-09 10:50:44.425073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.434 [2024-12-09 10:50:44.425079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:51.434 [2024-12-09 10:50:44.425371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.434 [2024-12-09 10:50:44.469454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.002 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.002 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:52.002 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:52.002 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:52.002 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:52.002 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.002 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:52.260 [2024-12-09 10:50:45.407061] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:52.260 [2024-12-09 10:50:45.407278] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:52.260 [2024-12-09 10:50:45.407413] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:52.519 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:52.519 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e679d838-7ffc-4341-b1d8-b1bbecb8878e 00:07:52.519 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e679d838-7ffc-4341-b1d8-b1bbecb8878e 00:07:52.519 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:52.519 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:52.519 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:52.519 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:52.519 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:52.778 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e679d838-7ffc-4341-b1d8-b1bbecb8878e -t 2000 00:07:52.778 [ 00:07:52.778 { 00:07:52.778 "name": "e679d838-7ffc-4341-b1d8-b1bbecb8878e", 00:07:52.778 "aliases": [ 00:07:52.778 "lvs/lvol" 00:07:52.778 ], 00:07:52.778 "product_name": "Logical Volume", 00:07:52.778 "block_size": 4096, 00:07:52.778 "num_blocks": 38912, 00:07:52.778 "uuid": "e679d838-7ffc-4341-b1d8-b1bbecb8878e", 00:07:52.778 "assigned_rate_limits": { 00:07:52.778 "rw_ios_per_sec": 0, 00:07:52.778 "rw_mbytes_per_sec": 0, 00:07:52.778 "r_mbytes_per_sec": 0, 00:07:52.778 "w_mbytes_per_sec": 0 00:07:52.778 }, 00:07:52.778 
"claimed": false, 00:07:52.778 "zoned": false, 00:07:52.778 "supported_io_types": { 00:07:52.778 "read": true, 00:07:52.778 "write": true, 00:07:52.778 "unmap": true, 00:07:52.778 "flush": false, 00:07:52.778 "reset": true, 00:07:52.778 "nvme_admin": false, 00:07:52.778 "nvme_io": false, 00:07:52.778 "nvme_io_md": false, 00:07:52.778 "write_zeroes": true, 00:07:52.778 "zcopy": false, 00:07:52.778 "get_zone_info": false, 00:07:52.778 "zone_management": false, 00:07:52.778 "zone_append": false, 00:07:52.778 "compare": false, 00:07:52.778 "compare_and_write": false, 00:07:52.778 "abort": false, 00:07:52.778 "seek_hole": true, 00:07:52.778 "seek_data": true, 00:07:52.778 "copy": false, 00:07:52.778 "nvme_iov_md": false 00:07:52.778 }, 00:07:52.778 "driver_specific": { 00:07:52.778 "lvol": { 00:07:52.778 "lvol_store_uuid": "47cca552-9d8c-459d-8cd8-89a1f7c30aba", 00:07:52.778 "base_bdev": "aio_bdev", 00:07:52.778 "thin_provision": false, 00:07:52.778 "num_allocated_clusters": 38, 00:07:52.778 "snapshot": false, 00:07:52.778 "clone": false, 00:07:52.778 "esnap_clone": false 00:07:52.778 } 00:07:52.778 } 00:07:52.778 } 00:07:52.778 ] 00:07:52.778 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:52.778 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47cca552-9d8c-459d-8cd8-89a1f7c30aba 00:07:52.778 10:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:53.036 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:53.036 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47cca552-9d8c-459d-8cd8-89a1f7c30aba 00:07:53.036 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:53.294 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:53.294 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:53.552 [2024-12-09 10:50:46.650816] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:53.552 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47cca552-9d8c-459d-8cd8-89a1f7c30aba 00:07:53.552 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:53.552 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47cca552-9d8c-459d-8cd8-89a1f7c30aba 00:07:53.552 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.552 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.552 10:50:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.552 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.552 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.552 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.552 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:53.552 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:53.552 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47cca552-9d8c-459d-8cd8-89a1f7c30aba 00:07:53.810 request: 00:07:53.810 { 00:07:53.810 "uuid": "47cca552-9d8c-459d-8cd8-89a1f7c30aba", 00:07:53.810 "method": "bdev_lvol_get_lvstores", 00:07:53.810 "req_id": 1 00:07:53.810 } 00:07:53.810 Got JSON-RPC error response 00:07:53.810 response: 00:07:53.810 { 00:07:53.810 "code": -19, 00:07:53.810 "message": "No such device" 00:07:53.810 } 00:07:53.810 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:53.810 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:53.810 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:53.810 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:53.810 10:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:54.068 aio_bdev 00:07:54.068 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e679d838-7ffc-4341-b1d8-b1bbecb8878e 00:07:54.068 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e679d838-7ffc-4341-b1d8-b1bbecb8878e 00:07:54.068 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:54.068 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:54.068 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:54.068 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:54.068 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:54.325 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e679d838-7ffc-4341-b1d8-b1bbecb8878e -t 2000 00:07:54.584 [ 00:07:54.584 { 
00:07:54.584 "name": "e679d838-7ffc-4341-b1d8-b1bbecb8878e", 00:07:54.584 "aliases": [ 00:07:54.584 "lvs/lvol" 00:07:54.584 ], 00:07:54.584 "product_name": "Logical Volume", 00:07:54.584 "block_size": 4096, 00:07:54.584 "num_blocks": 38912, 00:07:54.584 "uuid": "e679d838-7ffc-4341-b1d8-b1bbecb8878e", 00:07:54.584 "assigned_rate_limits": { 00:07:54.584 "rw_ios_per_sec": 0, 00:07:54.584 "rw_mbytes_per_sec": 0, 00:07:54.584 "r_mbytes_per_sec": 0, 00:07:54.584 "w_mbytes_per_sec": 0 00:07:54.584 }, 00:07:54.584 "claimed": false, 00:07:54.584 "zoned": false, 00:07:54.584 "supported_io_types": { 00:07:54.584 "read": true, 00:07:54.584 "write": true, 00:07:54.584 "unmap": true, 00:07:54.584 "flush": false, 00:07:54.584 "reset": true, 00:07:54.584 "nvme_admin": false, 00:07:54.584 "nvme_io": false, 00:07:54.584 "nvme_io_md": false, 00:07:54.584 "write_zeroes": true, 00:07:54.584 "zcopy": false, 00:07:54.584 "get_zone_info": false, 00:07:54.584 "zone_management": false, 00:07:54.584 "zone_append": false, 00:07:54.584 "compare": false, 00:07:54.584 "compare_and_write": false, 00:07:54.584 "abort": false, 00:07:54.584 "seek_hole": true, 00:07:54.584 "seek_data": true, 00:07:54.584 "copy": false, 00:07:54.584 "nvme_iov_md": false 00:07:54.584 }, 00:07:54.584 "driver_specific": { 00:07:54.584 "lvol": { 00:07:54.584 "lvol_store_uuid": "47cca552-9d8c-459d-8cd8-89a1f7c30aba", 00:07:54.584 "base_bdev": "aio_bdev", 00:07:54.584 "thin_provision": false, 00:07:54.584 "num_allocated_clusters": 38, 00:07:54.584 "snapshot": false, 00:07:54.584 "clone": false, 00:07:54.584 "esnap_clone": false 00:07:54.584 } 00:07:54.584 } 00:07:54.584 } 00:07:54.584 ] 00:07:54.584 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:54.584 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47cca552-9d8c-459d-8cd8-89a1f7c30aba 00:07:54.584 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:54.843 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:54.843 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 47cca552-9d8c-459d-8cd8-89a1f7c30aba 00:07:54.843 10:50:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:55.101 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:55.101 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e679d838-7ffc-4341-b1d8-b1bbecb8878e 00:07:55.359 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 47cca552-9d8c-459d-8cd8-89a1f7c30aba 00:07:55.617 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:55.875 10:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:56.137 ************************************ 00:07:56.137 END TEST lvs_grow_dirty 00:07:56.137 ************************************ 00:07:56.137 00:07:56.137 real 0m20.055s 00:07:56.137 user 0m41.272s 00:07:56.137 sys 0m7.013s 00:07:56.137 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.137 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:56.400 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:56.400 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:56.400 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:56.400 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:56.400 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:56.400 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:56.400 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:56.400 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:56.400 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:56.400 nvmf_trace.0 00:07:56.400 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:56.400 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:56.400 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:56.400 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:56.969 rmmod nvme_tcp 00:07:56.969 rmmod nvme_fabrics 00:07:56.969 rmmod nvme_keyring 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63932 ']' 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63932 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63932 ']' 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63932 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:56.969 10:50:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63932 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63932' 00:07:56.969 killing process with pid 63932 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63932 00:07:56.969 10:50:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63932 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.229 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.487 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:07:57.487 ************************************ 00:07:57.487 END TEST nvmf_lvs_grow 00:07:57.487 ************************************ 00:07:57.487 00:07:57.487 real 0m40.940s 00:07:57.487 user 1m4.541s 00:07:57.487 sys 0m10.812s 00:07:57.487 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.487 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:57.487 10:50:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:57.487 10:50:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.487 10:50:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.487 10:50:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.487 ************************************ 00:07:57.487 START TEST nvmf_bdev_io_wait 00:07:57.487 ************************************ 00:07:57.487 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:57.487 * Looking for test storage... 
00:07:57.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:57.487 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:57.487 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:57.487 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:57.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.747 --rc genhtml_branch_coverage=1 00:07:57.747 --rc genhtml_function_coverage=1 00:07:57.747 --rc genhtml_legend=1 00:07:57.747 --rc geninfo_all_blocks=1 00:07:57.747 --rc geninfo_unexecuted_blocks=1 00:07:57.747 00:07:57.747 ' 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:57.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.747 --rc genhtml_branch_coverage=1 00:07:57.747 --rc genhtml_function_coverage=1 00:07:57.747 --rc genhtml_legend=1 00:07:57.747 --rc geninfo_all_blocks=1 00:07:57.747 --rc geninfo_unexecuted_blocks=1 00:07:57.747 00:07:57.747 ' 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:57.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.747 --rc genhtml_branch_coverage=1 00:07:57.747 --rc genhtml_function_coverage=1 00:07:57.747 --rc genhtml_legend=1 00:07:57.747 --rc geninfo_all_blocks=1 00:07:57.747 --rc geninfo_unexecuted_blocks=1 00:07:57.747 00:07:57.747 ' 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:57.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.747 --rc genhtml_branch_coverage=1 00:07:57.747 --rc genhtml_function_coverage=1 00:07:57.747 --rc genhtml_legend=1 00:07:57.747 --rc geninfo_all_blocks=1 00:07:57.747 --rc geninfo_unexecuted_blocks=1 00:07:57.747 00:07:57.747 ' 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.747 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:57.748 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
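The nvmftestinit call that follows first tears down any stale interfaces (the "Cannot find device ..." messages are expected on a clean host) and then rebuilds a veth/bridge topology between the host and the nvmf_tgt_ns_spdk namespace. A condensed sketch of those steps, using only the link names and addresses that appear in the trace below (the second initiator/target pair, MTU/iptables handling, and error checking are omitted):

```sh
# Minimal sketch of the nvmf_veth_init topology traced below; not the full helper.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge joins the host-side ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3                                           # reachability check, as in the log
```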
00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:57.748 
10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:57.748 Cannot find device "nvmf_init_br" 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:57.748 Cannot find device "nvmf_init_br2" 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:57.748 Cannot find device "nvmf_tgt_br" 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:57.748 Cannot find device "nvmf_tgt_br2" 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:57.748 Cannot find device "nvmf_init_br" 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:57.748 Cannot find device "nvmf_init_br2" 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:57.748 Cannot find device "nvmf_tgt_br" 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:57.748 Cannot find device "nvmf_tgt_br2" 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:57.748 Cannot find device "nvmf_br" 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:07:57.748 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:58.008 Cannot find device "nvmf_init_if" 00:07:58.008 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:07:58.008 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:58.008 Cannot find device "nvmf_init_if2" 00:07:58.008 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:07:58.008 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:58.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:58.008 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:07:58.008 
10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:58.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:58.008 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:07:58.008 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:58.008 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:58.008 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:58.008 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:58.008 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:58.008 10:50:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:58.008 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:58.008 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:07:58.008 00:07:58.008 --- 10.0.0.3 ping statistics --- 00:07:58.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.008 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:58.008 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:58.008 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:07:58.008 00:07:58.008 --- 10.0.0.4 ping statistics --- 00:07:58.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.008 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:58.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:58.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:07:58.008 00:07:58.008 --- 10.0.0.1 ping statistics --- 00:07:58.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.008 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:58.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:58.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.030 ms 00:07:58.008 00:07:58.008 --- 10.0.0.2 ping statistics --- 00:07:58.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.008 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64300 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64300 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64300 ']' 00:07:58.008 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.009 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.009 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.009 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.009 10:50:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:58.268 [2024-12-09 10:50:51.205632] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:07:58.268 [2024-12-09 10:50:51.205840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.268 [2024-12-09 10:50:51.336546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.268 [2024-12-09 10:50:51.389137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.268 [2024-12-09 10:50:51.389269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.268 [2024-12-09 10:50:51.389302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.268 [2024-12-09 10:50:51.389327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.268 [2024-12-09 10:50:51.389343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:58.268 [2024-12-09 10:50:51.390340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.268 [2024-12-09 10:50:51.390552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.268 [2024-12-09 10:50:51.390552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.268 [2024-12-09 10:50:51.390454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.205 [2024-12-09 10:50:52.238645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.205 [2024-12-09 10:50:52.254091] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.205 Malloc0 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:59.205 [2024-12-09 10:50:52.305425] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64335 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64337 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:59.205 10:50:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:59.205 { 00:07:59.205 "params": { 00:07:59.205 "name": "Nvme$subsystem", 00:07:59.205 "trtype": "$TEST_TRANSPORT", 00:07:59.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:59.205 "adrfam": "ipv4", 00:07:59.205 "trsvcid": "$NVMF_PORT", 00:07:59.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:59.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:59.205 "hdgst": ${hdgst:-false}, 00:07:59.205 "ddgst": ${ddgst:-false} 00:07:59.205 }, 00:07:59.205 "method": "bdev_nvme_attach_controller" 00:07:59.205 } 00:07:59.205 EOF 00:07:59.205 )") 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64339 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:59.205 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:59.205 { 00:07:59.205 "params": { 00:07:59.205 "name": "Nvme$subsystem", 00:07:59.205 "trtype": "$TEST_TRANSPORT", 00:07:59.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:59.205 "adrfam": "ipv4", 00:07:59.205 "trsvcid": "$NVMF_PORT", 00:07:59.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:59.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:59.205 "hdgst": ${hdgst:-false}, 00:07:59.205 "ddgst": ${ddgst:-false} 00:07:59.206 }, 00:07:59.206 "method": "bdev_nvme_attach_controller" 00:07:59.206 } 00:07:59.206 EOF 00:07:59.206 )") 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64341 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:59.206 { 00:07:59.206 "params": { 00:07:59.206 "name": "Nvme$subsystem", 00:07:59.206 "trtype": "$TEST_TRANSPORT", 00:07:59.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:59.206 "adrfam": "ipv4", 00:07:59.206 "trsvcid": "$NVMF_PORT", 00:07:59.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:59.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:59.206 "hdgst": ${hdgst:-false}, 00:07:59.206 "ddgst": ${ddgst:-false} 00:07:59.206 }, 00:07:59.206 "method": 
"bdev_nvme_attach_controller" 00:07:59.206 } 00:07:59.206 EOF 00:07:59.206 )") 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:59.206 { 00:07:59.206 "params": { 00:07:59.206 "name": "Nvme$subsystem", 00:07:59.206 "trtype": "$TEST_TRANSPORT", 00:07:59.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:59.206 "adrfam": "ipv4", 00:07:59.206 "trsvcid": "$NVMF_PORT", 00:07:59.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:59.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:59.206 "hdgst": ${hdgst:-false}, 00:07:59.206 "ddgst": ${ddgst:-false} 00:07:59.206 }, 00:07:59.206 "method": "bdev_nvme_attach_controller" 00:07:59.206 } 00:07:59.206 EOF 00:07:59.206 )") 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:59.206 "params": { 00:07:59.206 "name": "Nvme1", 00:07:59.206 "trtype": "tcp", 00:07:59.206 "traddr": "10.0.0.3", 00:07:59.206 "adrfam": "ipv4", 00:07:59.206 "trsvcid": "4420", 00:07:59.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:59.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:59.206 "hdgst": false, 00:07:59.206 "ddgst": false 00:07:59.206 }, 00:07:59.206 "method": "bdev_nvme_attach_controller" 00:07:59.206 }' 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:59.206 "params": { 00:07:59.206 "name": "Nvme1", 00:07:59.206 "trtype": "tcp", 00:07:59.206 "traddr": "10.0.0.3", 00:07:59.206 "adrfam": "ipv4", 00:07:59.206 "trsvcid": "4420", 00:07:59.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:59.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:59.206 "hdgst": false, 00:07:59.206 "ddgst": false 00:07:59.206 }, 00:07:59.206 "method": "bdev_nvme_attach_controller" 00:07:59.206 }' 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:59.206 "params": { 00:07:59.206 "name": "Nvme1", 00:07:59.206 "trtype": "tcp", 00:07:59.206 "traddr": "10.0.0.3", 00:07:59.206 "adrfam": "ipv4", 00:07:59.206 "trsvcid": "4420", 00:07:59.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:59.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:59.206 "hdgst": false, 00:07:59.206 "ddgst": false 00:07:59.206 }, 00:07:59.206 "method": "bdev_nvme_attach_controller" 00:07:59.206 }' 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:59.206 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:59.206 "params": { 00:07:59.206 "name": "Nvme1", 00:07:59.206 "trtype": "tcp", 00:07:59.206 "traddr": "10.0.0.3", 00:07:59.206 "adrfam": "ipv4", 00:07:59.206 "trsvcid": "4420", 00:07:59.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:59.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:59.206 "hdgst": false, 00:07:59.206 "ddgst": false 00:07:59.206 }, 00:07:59.206 "method": "bdev_nvme_attach_controller" 00:07:59.206 }' 00:07:59.206 [2024-12-09 10:50:52.359409] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:07:59.206 [2024-12-09 10:50:52.360014] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:59.206 [2024-12-09 10:50:52.373120] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:07:59.206 [2024-12-09 10:50:52.373239] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:59.206 [2024-12-09 10:50:52.378149] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:07:59.206 [2024-12-09 10:50:52.378276] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:59.206 [2024-12-09 10:50:52.379521] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:07:59.465 [2024-12-09 10:50:52.382798] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:59.465 10:50:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64335 00:07:59.465 [2024-12-09 10:50:52.573379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.465 [2024-12-09 10:50:52.618612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.465 [2024-12-09 10:50:52.621183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:59.465 [2024-12-09 10:50:52.633436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.723 [2024-12-09 10:50:52.682532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:59.723 [2024-12-09 10:50:52.695090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.723 [2024-12-09 10:50:52.750121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.723 [2024-12-09 10:50:52.798078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:59.723 [2024-12-09 10:50:52.810455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.723 Running I/O for 1 seconds... 00:07:59.723 Running I/O for 1 seconds... 00:07:59.723 [2024-12-09 10:50:52.898310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.982 Running I/O for 1 seconds... 00:07:59.982 [2024-12-09 10:50:52.945834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:59.982 [2024-12-09 10:50:52.958202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.982 Running I/O for 1 seconds... 
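Stripped of the EAL and reactor chatter, this phase of bdev_io_wait is four bdevperf processes hammering the same subsystem concurrently, one per I/O type, each pinned to its own core. A condensed sketch of that launch/wait pattern follows; the flush and unmap invocations and the FLUSH_PID/UNMAP_PID names appear verbatim in the trace, while the read and write lines and their pid variable names are inferred from the core masks (0x20, 0x10) and file prefixes (spdk2, spdk1) in the EAL parameter dumps.

# Condensed sketch of the concurrent-workload pattern traced here. Core masks,
# queue depth, I/O size and runtime are the traced values; the read/write
# invocations and their pid variable names are illustrative, not copied verbatim.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

$BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
$BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
$BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
$BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!
sync

# The script then waits on all four pids (traced as "wait 64335" above and
# "wait 64337" / "wait 64339" / "wait 64341" in the lines that follow).
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"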
00:08:00.918 185928.00 IOPS, 726.28 MiB/s 00:08:00.918 Latency(us) 00:08:00.918 [2024-12-09T10:50:54.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.918 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:00.918 Nvme1n1 : 1.00 185581.54 724.93 0.00 0.00 686.20 313.01 1845.88 00:08:00.918 [2024-12-09T10:50:54.097Z] =================================================================================================================== 00:08:00.918 [2024-12-09T10:50:54.097Z] Total : 185581.54 724.93 0.00 0.00 686.20 313.01 1845.88 00:08:00.918 8329.00 IOPS, 32.54 MiB/s 00:08:00.918 Latency(us) 00:08:00.918 [2024-12-09T10:50:54.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.918 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:00.918 Nvme1n1 : 1.01 8401.07 32.82 0.00 0.00 15175.65 7440.77 20948.63 00:08:00.918 [2024-12-09T10:50:54.097Z] =================================================================================================================== 00:08:00.918 [2024-12-09T10:50:54.097Z] Total : 8401.07 32.82 0.00 0.00 15175.65 7440.77 20948.63 00:08:00.918 4088.00 IOPS, 15.97 MiB/s 00:08:00.918 Latency(us) 00:08:00.918 [2024-12-09T10:50:54.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.918 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:00.918 Nvme1n1 : 1.01 4164.86 16.27 0.00 0.00 30527.63 12019.70 43499.88 00:08:00.918 [2024-12-09T10:50:54.097Z] =================================================================================================================== 00:08:00.918 [2024-12-09T10:50:54.097Z] Total : 4164.86 16.27 0.00 0.00 30527.63 12019.70 43499.88 00:08:01.177 4901.00 IOPS, 19.14 MiB/s 00:08:01.177 Latency(us) 00:08:01.177 [2024-12-09T10:50:54.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.177 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:01.177 Nvme1n1 : 1.01 4985.03 19.47 0.00 0.00 25535.00 2203.61 42126.20 00:08:01.177 [2024-12-09T10:50:54.356Z] =================================================================================================================== 00:08:01.177 [2024-12-09T10:50:54.356Z] Total : 4985.03 19.47 0.00 0.00 25535.00 2203.61 42126.20 00:08:01.177 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64337 00:08:01.177 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64339 00:08:01.177 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64341 00:08:01.177 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:01.177 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.177 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:01.177 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.177 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:01.177 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:01.177 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:08:01.177 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:01.177 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:01.177 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:01.177 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:01.177 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:01.177 rmmod nvme_tcp 00:08:01.437 rmmod nvme_fabrics 00:08:01.437 rmmod nvme_keyring 00:08:01.437 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:01.437 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:01.437 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:01.437 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64300 ']' 00:08:01.437 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64300 00:08:01.437 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64300 ']' 00:08:01.437 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64300 00:08:01.437 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:01.437 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.437 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64300 00:08:01.437 killing process with pid 64300 00:08:01.437 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.437 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.437 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64300' 00:08:01.437 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64300 00:08:01.437 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64300 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:01.697 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:01.997 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:01.997 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:01.997 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:01.997 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.997 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.997 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.997 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:01.997 00:08:01.997 real 0m4.472s 00:08:01.997 user 0m18.117s 00:08:01.997 sys 0m2.119s 00:08:01.997 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.997 10:50:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:01.997 ************************************ 00:08:01.997 END TEST nvmf_bdev_io_wait 00:08:01.997 ************************************ 00:08:01.997 10:50:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:01.997 10:50:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:01.997 10:50:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.997 10:50:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:01.997 ************************************ 00:08:01.997 START TEST nvmf_queue_depth 00:08:01.997 ************************************ 00:08:01.997 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:02.256 * Looking for test storage... 
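The lines just above also carry the standard nvmftestfini teardown that closes the bdev_io_wait test (the identical sequence closes the queue_depth test further down). Extracted from the xtrace it amounts to roughly the following; the final netns removal is an assumption about what _remove_spdk_ns does, since its body is not traced here.

# Teardown pattern as traced: drop only the firewall rules SPDK added (they all
# carry an SPDK_NVMF comment), then dismantle the veth/bridge topology and the
# target network namespace.
iptables-save | grep -v SPDK_NVMF | iptables-restore        # the "iptr" helper

for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
done
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                             # assumed body of _remove_spdk_ns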
00:08:02.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:02.256 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:02.256 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:02.256 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:02.256 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:02.256 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.256 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.256 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.256 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.256 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.256 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.256 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.256 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.256 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.256 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.256 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.256 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:02.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.257 --rc genhtml_branch_coverage=1 00:08:02.257 --rc genhtml_function_coverage=1 00:08:02.257 --rc genhtml_legend=1 00:08:02.257 --rc geninfo_all_blocks=1 00:08:02.257 --rc geninfo_unexecuted_blocks=1 00:08:02.257 00:08:02.257 ' 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:02.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.257 --rc genhtml_branch_coverage=1 00:08:02.257 --rc genhtml_function_coverage=1 00:08:02.257 --rc genhtml_legend=1 00:08:02.257 --rc geninfo_all_blocks=1 00:08:02.257 --rc geninfo_unexecuted_blocks=1 00:08:02.257 00:08:02.257 ' 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:02.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.257 --rc genhtml_branch_coverage=1 00:08:02.257 --rc genhtml_function_coverage=1 00:08:02.257 --rc genhtml_legend=1 00:08:02.257 --rc geninfo_all_blocks=1 00:08:02.257 --rc geninfo_unexecuted_blocks=1 00:08:02.257 00:08:02.257 ' 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:02.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.257 --rc genhtml_branch_coverage=1 00:08:02.257 --rc genhtml_function_coverage=1 00:08:02.257 --rc genhtml_legend=1 00:08:02.257 --rc geninfo_all_blocks=1 00:08:02.257 --rc geninfo_unexecuted_blocks=1 00:08:02.257 00:08:02.257 ' 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:02.257 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:02.257 
10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:02.257 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:02.258 10:50:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:02.258 Cannot find device "nvmf_init_br" 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:02.258 Cannot find device "nvmf_init_br2" 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:02.258 Cannot find device "nvmf_tgt_br" 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:02.258 Cannot find device "nvmf_tgt_br2" 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:02.258 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:02.517 Cannot find device "nvmf_init_br" 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:02.517 Cannot find device "nvmf_init_br2" 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:02.517 Cannot find device "nvmf_tgt_br" 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:02.517 Cannot find device "nvmf_tgt_br2" 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:02.517 Cannot find device "nvmf_br" 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:02.517 Cannot find device "nvmf_init_if" 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:02.517 Cannot find device "nvmf_init_if2" 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:02.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:02.517 10:50:55 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:02.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:02.517 
10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:02.517 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:02.776 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:02.776 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.138 ms 00:08:02.776 00:08:02.776 --- 10.0.0.3 ping statistics --- 00:08:02.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.776 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:02.776 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:02.776 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:08:02.776 00:08:02.776 --- 10.0.0.4 ping statistics --- 00:08:02.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.776 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:02.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:08:02.776 00:08:02.776 --- 10.0.0.1 ping statistics --- 00:08:02.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.776 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:02.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:02.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:08:02.776 00:08:02.776 --- 10.0.0.2 ping statistics --- 00:08:02.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.776 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64640 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64640 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64640 ']' 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.776 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.777 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.777 10:50:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:02.777 [2024-12-09 10:50:55.802448] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:08:02.777 [2024-12-09 10:50:55.802519] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.035 [2024-12-09 10:50:55.959413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.035 [2024-12-09 10:50:56.016131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.036 [2024-12-09 10:50:56.016183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.036 [2024-12-09 10:50:56.016190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.036 [2024-12-09 10:50:56.016196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.036 [2024-12-09 10:50:56.016201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.036 [2024-12-09 10:50:56.016492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.036 [2024-12-09 10:50:56.059301] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.603 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.603 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:03.603 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.603 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:03.603 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.863 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.863 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:03.863 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.863 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.863 [2024-12-09 10:50:56.789960] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.863 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.863 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:03.863 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.863 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.863 Malloc0 00:08:03.863 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.864 [2024-12-09 10:50:56.837218] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64672 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64672 /var/tmp/bdevperf.sock 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64672 ']' 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.864 10:50:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:03.864 [2024-12-09 10:50:56.897567] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:08:03.864 [2024-12-09 10:50:56.897655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64672 ] 00:08:03.864 [2024-12-09 10:50:57.032529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.123 [2024-12-09 10:50:57.088188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.123 [2024-12-09 10:50:57.132678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.123 10:50:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.123 10:50:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:04.123 10:50:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:04.123 10:50:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.123 10:50:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:04.123 NVMe0n1 00:08:04.123 10:50:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.123 10:50:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:04.381 Running I/O for 10 seconds... 00:08:06.254 7955.00 IOPS, 31.07 MiB/s [2024-12-09T10:51:00.420Z] 9216.00 IOPS, 36.00 MiB/s [2024-12-09T10:51:01.818Z] 9583.33 IOPS, 37.43 MiB/s [2024-12-09T10:51:02.385Z] 9795.25 IOPS, 38.26 MiB/s [2024-12-09T10:51:03.764Z] 9852.40 IOPS, 38.49 MiB/s [2024-12-09T10:51:04.702Z] 9911.67 IOPS, 38.72 MiB/s [2024-12-09T10:51:05.645Z] 9984.43 IOPS, 39.00 MiB/s [2024-12-09T10:51:06.598Z] 10001.62 IOPS, 39.07 MiB/s [2024-12-09T10:51:07.536Z] 10099.56 IOPS, 39.45 MiB/s [2024-12-09T10:51:07.536Z] 10143.20 IOPS, 39.62 MiB/s 00:08:14.357 Latency(us) 00:08:14.357 [2024-12-09T10:51:07.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.357 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:14.357 Verification LBA range: start 0x0 length 0x4000 00:08:14.357 NVMe0n1 : 10.08 10161.33 39.69 0.00 0.00 100390.14 22780.20 85168.18 00:08:14.357 [2024-12-09T10:51:07.536Z] =================================================================================================================== 00:08:14.357 [2024-12-09T10:51:07.537Z] Total : 10161.33 39.69 0.00 0.00 100390.14 22780.20 85168.18 00:08:14.358 { 00:08:14.358 "results": [ 00:08:14.358 { 00:08:14.358 "job": "NVMe0n1", 00:08:14.358 "core_mask": "0x1", 00:08:14.358 "workload": "verify", 00:08:14.358 "status": "finished", 00:08:14.358 "verify_range": { 00:08:14.358 "start": 0, 00:08:14.358 "length": 16384 00:08:14.358 }, 00:08:14.358 "queue_depth": 1024, 00:08:14.358 "io_size": 4096, 00:08:14.358 "runtime": 10.081356, 00:08:14.358 "iops": 10161.331471679008, 00:08:14.358 "mibps": 39.692701061246126, 00:08:14.358 "io_failed": 0, 00:08:14.358 "io_timeout": 0, 00:08:14.358 "avg_latency_us": 100390.13622967283, 00:08:14.358 "min_latency_us": 22780.199126637555, 00:08:14.358 "max_latency_us": 85168.18165938865 
00:08:14.358 } 00:08:14.358 ], 00:08:14.358 "core_count": 1 00:08:14.358 } 00:08:14.358 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64672 00:08:14.358 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64672 ']' 00:08:14.358 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64672 00:08:14.358 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:14.358 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.358 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64672 00:08:14.358 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.358 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.358 killing process with pid 64672 00:08:14.358 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64672' 00:08:14.358 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64672 00:08:14.358 Received shutdown signal, test time was about 10.000000 seconds 00:08:14.358 00:08:14.358 Latency(us) 00:08:14.358 [2024-12-09T10:51:07.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.358 [2024-12-09T10:51:07.537Z] =================================================================================================================== 00:08:14.358 [2024-12-09T10:51:07.537Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:14.358 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64672 00:08:14.617 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:14.617 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:14.617 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:14.617 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:14.617 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:14.617 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:14.617 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:14.617 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:14.617 rmmod nvme_tcp 00:08:14.617 rmmod nvme_fabrics 00:08:14.877 rmmod nvme_keyring 00:08:14.877 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:14.877 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:14.877 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:14.877 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64640 ']' 00:08:14.877 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64640 00:08:14.877 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64640 ']' 
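Stripped of the xtrace prefixes, the queue_depth run that just finished boils down to a short target-then-host sequence. In the sketch below every RPC method, NQN, address and bdevperf option is taken verbatim from the trace; the only substitution is that scripts/rpc.py stands in for the test's rpc_cmd wrapper.

# Condensed from the trace: target-side setup on the default RPC socket,
# then a bdevperf host driven over its own socket at queue depth 1024.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# Target side (nvmf_tgt already running inside nvmf_tgt_ns_spdk as pid 64640)
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Host side: bdevperf waits for RPC (-z), gets the controller attached over its
# own socket, then perform_tests drives the 10-second verify run at qd 1024.
$BDEVPERF -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests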
00:08:14.877 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64640 00:08:14.877 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:14.877 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.877 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64640 00:08:14.877 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:14.877 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:14.877 killing process with pid 64640 00:08:14.877 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64640' 00:08:14.877 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64640 00:08:14.877 10:51:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64640 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:15.137 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:15.137 10:51:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:15.397 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:15.397 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:15.397 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.397 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.397 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.397 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:15.397 00:08:15.397 real 0m13.340s 00:08:15.397 user 0m22.315s 00:08:15.397 sys 0m2.230s 00:08:15.397 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.397 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:15.397 ************************************ 00:08:15.397 END TEST nvmf_queue_depth 00:08:15.397 ************************************ 00:08:15.397 10:51:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:15.397 10:51:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:15.397 10:51:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.397 10:51:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.397 ************************************ 00:08:15.397 START TEST nvmf_target_multipath 00:08:15.397 ************************************ 00:08:15.397 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:15.397 * Looking for test storage... 
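For reference, the nvmftestfini teardown traced at the end of the queue-depth test above boils down to the following (xtrace prefixes stripped; only one interface of each kind is shown):

# unload the initiator-side modules (the trace runs modprobe -v -r for each)
modprobe -r nvme-tcp
modprobe -r nvme-fabrics

# restore iptables, dropping only the SPDK_NVMF-tagged rules
iptables-save | grep -v SPDK_NVMF | iptables-restore

# detach and delete the virtual links (repeated for each nvmf_* interface)
ip link set nvmf_init_br nomaster
ip link set nvmf_init_br down
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if

# _remove_spdk_ns then drops the namespace itself (assumed; its body is not expanded in the trace)
ip netns delete nvmf_tgt_ns_spdk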
00:08:15.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:15.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.658 --rc genhtml_branch_coverage=1 00:08:15.658 --rc genhtml_function_coverage=1 00:08:15.658 --rc genhtml_legend=1 00:08:15.658 --rc geninfo_all_blocks=1 00:08:15.658 --rc geninfo_unexecuted_blocks=1 00:08:15.658 00:08:15.658 ' 00:08:15.658 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:15.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.658 --rc genhtml_branch_coverage=1 00:08:15.658 --rc genhtml_function_coverage=1 00:08:15.658 --rc genhtml_legend=1 00:08:15.658 --rc geninfo_all_blocks=1 00:08:15.658 --rc geninfo_unexecuted_blocks=1 00:08:15.658 00:08:15.659 ' 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:15.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.659 --rc genhtml_branch_coverage=1 00:08:15.659 --rc genhtml_function_coverage=1 00:08:15.659 --rc genhtml_legend=1 00:08:15.659 --rc geninfo_all_blocks=1 00:08:15.659 --rc geninfo_unexecuted_blocks=1 00:08:15.659 00:08:15.659 ' 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:15.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.659 --rc genhtml_branch_coverage=1 00:08:15.659 --rc genhtml_function_coverage=1 00:08:15.659 --rc genhtml_legend=1 00:08:15.659 --rc geninfo_all_blocks=1 00:08:15.659 --rc geninfo_unexecuted_blocks=1 00:08:15.659 00:08:15.659 ' 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.659 
10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:15.659 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:15.659 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:15.660 10:51:08 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:15.660 Cannot find device "nvmf_init_br" 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:15.660 Cannot find device "nvmf_init_br2" 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:15.660 Cannot find device "nvmf_tgt_br" 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:15.660 Cannot find device "nvmf_tgt_br2" 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:15.660 Cannot find device "nvmf_init_br" 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:15.660 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:15.919 Cannot find device "nvmf_init_br2" 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:15.919 Cannot find device "nvmf_tgt_br" 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:15.919 Cannot find device "nvmf_tgt_br2" 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:15.919 Cannot find device "nvmf_br" 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:15.919 Cannot find device "nvmf_init_if" 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:15.919 Cannot find device "nvmf_init_if2" 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:15.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:15.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:15.919 10:51:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
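The nvmf_veth_init steps traced above rebuild the virtual test network for the multipath run. Condensed to one representative path (commands taken from the trace; the second pair, nvmf_init_if2/nvmf_tgt_if2, is addressed as 10.0.0.2 and 10.0.0.4 in the same way):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br                # initiator pair, host side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                  # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                           # target end lives inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address, path 1
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address, path 1
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

Each addressed pair is one independent path into the namespace, which is what lets the multipath test listen on and connect to both 10.0.0.3 and 10.0.0.4 later on.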
00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:15.919 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:16.178 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:16.178 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:08:16.178 00:08:16.178 --- 10.0.0.3 ping statistics --- 00:08:16.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.178 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:16.178 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:16.178 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:08:16.178 00:08:16.178 --- 10.0.0.4 ping statistics --- 00:08:16.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.178 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:16.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:16.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:08:16.178 00:08:16.178 --- 10.0.0.1 ping statistics --- 00:08:16.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.178 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:16.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:08:16.178 00:08:16.178 --- 10.0.0.2 ping statistics --- 00:08:16.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.178 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=65039 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 65039 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 65039 ']' 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
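The remaining plumbing traced above bridges the host-side peer interfaces together, opens TCP port 4420 through the firewall, and sanity-checks connectivity in both directions before the target application is brought up inside the namespace. Condensed (commands as traced; the iptables comment text is abbreviated here):

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                # likewise nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
ping -c 1 10.0.0.3                                     # host -> target, path 1
ping -c 1 10.0.0.4                                     # host -> target, path 2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1      # namespace -> host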
00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:16.178 10:51:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:16.178 [2024-12-09 10:51:09.186285] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:08:16.178 [2024-12-09 10:51:09.186347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.178 [2024-12-09 10:51:09.337657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.437 [2024-12-09 10:51:09.397540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.437 [2024-12-09 10:51:09.397610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.437 [2024-12-09 10:51:09.397619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.437 [2024-12-09 10:51:09.397626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.437 [2024-12-09 10:51:09.397632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.437 [2024-12-09 10:51:09.398931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.437 [2024-12-09 10:51:09.398712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.437 [2024-12-09 10:51:09.398931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.437 [2024-12-09 10:51:09.398851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.437 [2024-12-09 10:51:09.440948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.004 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.004 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:08:17.004 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:17.004 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:17.004 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:17.004 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.004 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:17.264 [2024-12-09 10:51:10.323090] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.264 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:17.523 Malloc0 00:08:17.523 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:17.782 10:51:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:18.041 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:18.300 [2024-12-09 10:51:11.235057] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:18.300 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:18.300 [2024-12-09 10:51:11.451045] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:18.300 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid=0813c78c-bf40-477e-b94d-3900e5d9beb7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:18.560 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid=0813c78c-bf40-477e-b94d-3900e5d9beb7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:18.819 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:18.819 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:08:18.819 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:18.819 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:18.819 10:51:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:08:20.728 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:20.728 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:20.728 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:20.728 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65128 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:20.729 10:51:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:20.729 [global] 00:08:20.729 thread=1 00:08:20.729 invalidate=1 00:08:20.729 rw=randrw 00:08:20.729 time_based=1 00:08:20.729 runtime=6 00:08:20.729 ioengine=libaio 00:08:20.729 direct=1 00:08:20.729 bs=4096 00:08:20.729 iodepth=128 00:08:20.729 norandommap=0 00:08:20.729 numjobs=1 00:08:20.729 00:08:20.729 verify_dump=1 00:08:20.729 verify_backlog=512 00:08:20.729 verify_state_save=0 00:08:20.729 do_verify=1 00:08:20.729 verify=crc32c-intel 00:08:20.729 [job0] 00:08:20.729 filename=/dev/nvme0n1 00:08:20.729 Could not set queue depth (nvme0n1) 00:08:20.988 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:20.988 fio-3.35 00:08:20.988 Starting 1 thread 00:08:21.925 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:21.925 10:51:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:22.183 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:22.183 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:22.183 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:22.183 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:22.183 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:22.183 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:22.183 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:22.183 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:22.183 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:22.183 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:22.183 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:22.183 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:22.183 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:22.441 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:22.702 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:22.702 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:22.702 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:22.702 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:22.702 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:22.702 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:22.702 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:22.702 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:22.702 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:22.702 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:22.702 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:22.702 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:22.702 10:51:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65128 00:08:27.979 00:08:27.979 job0: (groupid=0, jobs=1): err= 0: pid=65150: Mon Dec 9 10:51:20 2024 00:08:27.979 read: IOPS=12.5k, BW=48.7MiB/s (51.0MB/s)(292MiB/6002msec) 00:08:27.979 slat (usec): min=4, max=4995, avg=44.68, stdev=163.43 00:08:27.979 clat (usec): min=481, max=23497, avg=7036.82, stdev=1383.94 00:08:27.979 lat (usec): min=515, max=24109, avg=7081.50, stdev=1389.78 00:08:27.979 clat percentiles (usec): 00:08:27.979 | 1.00th=[ 4178], 5.00th=[ 5145], 10.00th=[ 5800], 20.00th=[ 6325], 00:08:27.979 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 7046], 00:08:27.979 | 70.00th=[ 7242], 80.00th=[ 7570], 90.00th=[ 8225], 95.00th=[ 9896], 00:08:27.979 | 99.00th=[11469], 99.50th=[12256], 99.90th=[19006], 99.95th=[20055], 00:08:27.979 | 99.99th=[22152] 00:08:27.979 bw ( KiB/s): min=13392, max=35968, per=53.49%, avg=26664.73, stdev=7210.58, samples=11 00:08:27.979 iops : min= 3348, max= 8992, avg=6666.18, stdev=1802.65, samples=11 00:08:27.979 write: IOPS=7405, BW=28.9MiB/s (30.3MB/s)(150MiB/5185msec); 0 zone resets 00:08:27.979 slat (usec): min=9, max=7321, avg=58.11, stdev=114.99 00:08:27.979 clat (usec): min=439, max=22572, avg=6101.32, stdev=1275.54 00:08:27.979 lat (usec): min=540, max=22613, avg=6159.43, stdev=1281.86 00:08:27.979 clat percentiles (usec): 00:08:27.979 | 1.00th=[ 3621], 5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5407], 00:08:27.979 | 30.00th=[ 5735], 40.00th=[ 5932], 50.00th=[ 6128], 60.00th=[ 6259], 00:08:27.979 | 70.00th=[ 6456], 80.00th=[ 6718], 90.00th=[ 7046], 95.00th=[ 7439], 00:08:27.979 | 99.00th=[10159], 99.50th=[11338], 99.90th=[19006], 99.95th=[20579], 00:08:27.979 | 99.99th=[22152] 00:08:27.979 bw ( KiB/s): min=13944, max=35230, per=89.84%, avg=26614.36, stdev=6746.15, samples=11 00:08:27.979 iops : min= 3486, max= 8807, avg=6653.55, stdev=1686.47, samples=11 00:08:27.979 lat (usec) : 500=0.01%, 750=0.01% 00:08:27.979 lat (msec) : 2=0.09%, 4=1.35%, 10=95.03%, 20=3.46%, 50=0.06% 00:08:27.979 cpu : usr=6.25%, sys=31.31%, ctx=6920, majf=0, minf=102 00:08:27.979 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:27.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:27.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:27.979 issued rwts: total=74801,38400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:27.979 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:27.979 00:08:27.979 Run status group 0 (all jobs): 00:08:27.979 READ: bw=48.7MiB/s (51.0MB/s), 48.7MiB/s-48.7MiB/s (51.0MB/s-51.0MB/s), io=292MiB (306MB), run=6002-6002msec 00:08:27.979 WRITE: bw=28.9MiB/s (30.3MB/s), 28.9MiB/s-28.9MiB/s (30.3MB/s-30.3MB/s), io=150MiB (157MB), run=5185-5185msec 00:08:27.979 00:08:27.979 Disk stats (read/write): 00:08:27.979 nvme0n1: ios=73113/38400, merge=0/0, ticks=477401/210051, in_queue=687452, util=98.55% 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65234 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:27.979 10:51:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:27.979 [global] 00:08:27.979 thread=1 00:08:27.979 invalidate=1 00:08:27.979 rw=randrw 00:08:27.979 time_based=1 00:08:27.979 runtime=6 00:08:27.979 ioengine=libaio 00:08:27.979 direct=1 00:08:27.979 bs=4096 00:08:27.979 iodepth=128 00:08:27.979 norandommap=0 00:08:27.979 numjobs=1 00:08:27.979 00:08:27.979 verify_dump=1 00:08:27.979 verify_backlog=512 00:08:27.979 verify_state_save=0 00:08:27.979 do_verify=1 00:08:27.979 verify=crc32c-intel 00:08:27.979 [job0] 00:08:27.979 filename=/dev/nvme0n1 00:08:27.979 Could not set queue depth (nvme0n1) 00:08:27.979 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:27.979 fio-3.35 00:08:27.979 Starting 1 thread 00:08:28.548 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:28.807 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:08:29.065 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:29.065 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:29.065 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:29.065 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:29.065 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:29.065 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:29.065 10:51:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:29.065 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:29.065 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:29.065 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:29.065 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:29.065 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:29.065 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:29.065 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:29.324 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:29.324 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:29.324 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:29.324 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:29.324 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:29.324 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:29.324 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:29.324 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:29.324 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:29.324 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:29.324 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:29.324 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:29.324 10:51:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65234 00:08:34.597 00:08:34.597 job0: (groupid=0, jobs=1): err= 0: pid=65255: Mon Dec 9 10:51:26 2024 00:08:34.597 read: IOPS=11.3k, BW=44.1MiB/s (46.3MB/s)(265MiB/6006msec) 00:08:34.597 slat (usec): min=4, max=5661, avg=43.10, stdev=157.18 00:08:34.597 clat (usec): min=377, max=21794, avg=7742.12, stdev=2407.78 00:08:34.597 lat (usec): min=391, max=21809, avg=7785.22, stdev=2411.29 00:08:34.597 clat percentiles (usec): 00:08:34.597 | 1.00th=[ 1696], 5.00th=[ 4047], 10.00th=[ 5276], 20.00th=[ 6652], 00:08:34.597 | 30.00th=[ 7177], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7898], 00:08:34.597 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 9896], 95.00th=[12125], 00:08:34.597 | 99.00th=[16712], 99.50th=[17957], 99.90th=[19792], 99.95th=[20317], 00:08:34.597 | 99.99th=[21365] 00:08:34.597 bw ( KiB/s): min= 9892, max=34488, per=52.11%, avg=23540.73, stdev=7475.63, samples=11 00:08:34.597 iops : min= 2473, max= 8622, avg=5885.18, stdev=1868.91, samples=11 00:08:34.597 write: IOPS=7064, BW=27.6MiB/s (28.9MB/s)(141MiB/5108msec); 0 zone resets 00:08:34.597 slat (usec): min=15, max=3303, avg=57.13, stdev=106.89 00:08:34.597 clat (usec): min=302, max=19077, avg=6560.52, stdev=2063.57 00:08:34.597 lat (usec): min=348, max=19115, avg=6617.65, stdev=2068.18 00:08:34.597 clat percentiles (usec): 00:08:34.597 | 1.00th=[ 1532], 5.00th=[ 3294], 10.00th=[ 4113], 20.00th=[ 5080], 00:08:34.597 | 30.00th=[ 5997], 40.00th=[ 6456], 50.00th=[ 6718], 60.00th=[ 6980], 00:08:34.597 | 70.00th=[ 7242], 80.00th=[ 7504], 90.00th=[ 8029], 95.00th=[ 9765], 00:08:34.597 | 99.00th=[13960], 99.50th=[14746], 99.90th=[17957], 99.95th=[18744], 00:08:34.597 | 99.99th=[19006] 00:08:34.597 bw ( KiB/s): min=10067, max=34024, per=83.47%, avg=23589.36, stdev=7239.46, samples=11 00:08:34.597 iops : min= 2516, max= 8506, avg=5897.27, stdev=1810.00, samples=11 00:08:34.597 lat (usec) : 500=0.02%, 750=0.07%, 1000=0.10% 00:08:34.597 lat (msec) : 2=1.47%, 4=4.62%, 10=85.75%, 20=7.90%, 50=0.06% 00:08:34.598 cpu : usr=6.26%, sys=30.29%, ctx=6496, majf=0, minf=145 00:08:34.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:34.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:34.598 issued rwts: total=67833,36087,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.598 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:08:34.598 00:08:34.598 Run status group 0 (all jobs): 00:08:34.598 READ: bw=44.1MiB/s (46.3MB/s), 44.1MiB/s-44.1MiB/s (46.3MB/s-46.3MB/s), io=265MiB (278MB), run=6006-6006msec 00:08:34.598 WRITE: bw=27.6MiB/s (28.9MB/s), 27.6MiB/s-27.6MiB/s (28.9MB/s-28.9MB/s), io=141MiB (148MB), run=5108-5108msec 00:08:34.598 00:08:34.598 Disk stats (read/write): 00:08:34.598 nvme0n1: ios=66848/35333, merge=0/0, ticks=487682/212563, in_queue=700245, util=98.71% 00:08:34.598 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:34.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:34.598 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:34.598 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:08:34.598 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:34.598 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.598 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:34.598 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.598 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:08:34.598 10:51:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:34.598 rmmod nvme_tcp 00:08:34.598 rmmod nvme_fabrics 00:08:34.598 rmmod nvme_keyring 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 65039 ']' 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 65039 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 65039 ']' 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 65039 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65039 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65039' 00:08:34.598 killing process with pid 65039 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 65039 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 65039 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:34.598 
10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:34.598 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:34.857 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:34.857 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:34.857 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:34.858 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.858 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.858 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.858 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:34.858 00:08:34.858 real 0m19.405s 00:08:34.858 user 1m12.483s 00:08:34.858 sys 0m9.281s 00:08:34.858 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.858 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:34.858 ************************************ 00:08:34.858 END TEST nvmf_target_multipath 00:08:34.858 ************************************ 00:08:34.858 10:51:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:34.858 10:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.858 10:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.858 10:51:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.858 ************************************ 00:08:34.858 START TEST nvmf_zcopy 00:08:34.858 ************************************ 00:08:34.858 10:51:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:35.117 * Looking for test storage... 
00:08:35.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:35.117 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:35.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.118 --rc genhtml_branch_coverage=1 00:08:35.118 --rc genhtml_function_coverage=1 00:08:35.118 --rc genhtml_legend=1 00:08:35.118 --rc geninfo_all_blocks=1 00:08:35.118 --rc geninfo_unexecuted_blocks=1 00:08:35.118 00:08:35.118 ' 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:35.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.118 --rc genhtml_branch_coverage=1 00:08:35.118 --rc genhtml_function_coverage=1 00:08:35.118 --rc genhtml_legend=1 00:08:35.118 --rc geninfo_all_blocks=1 00:08:35.118 --rc geninfo_unexecuted_blocks=1 00:08:35.118 00:08:35.118 ' 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:35.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.118 --rc genhtml_branch_coverage=1 00:08:35.118 --rc genhtml_function_coverage=1 00:08:35.118 --rc genhtml_legend=1 00:08:35.118 --rc geninfo_all_blocks=1 00:08:35.118 --rc geninfo_unexecuted_blocks=1 00:08:35.118 00:08:35.118 ' 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:35.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.118 --rc genhtml_branch_coverage=1 00:08:35.118 --rc genhtml_function_coverage=1 00:08:35.118 --rc genhtml_legend=1 00:08:35.118 --rc geninfo_all_blocks=1 00:08:35.118 --rc geninfo_unexecuted_blocks=1 00:08:35.118 00:08:35.118 ' 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
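The trace just above comes from scripts/common.sh deciding whether the installed lcov predates 2.x: "lt 1.15 2" calls cmp_versions, which splits each version string on the characters ". - :" and compares the fields numerically. As an illustration only, and not part of the SPDK scripts, a simplified Bash equivalent of that comparison (splitting on dots only) could look like:
# Illustration only: simplified sketch of the cmp_versions logic traced above.
# Splits on dots (the real helper also splits on '-' and ':') and returns 0
# when the first version sorts strictly below the second.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov predates 2.x; keep the 1.x coverage flags"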
00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.118 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
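The "[: : integer expression expected" message above is a genuine shell complaint from nvmf/common.sh line 33: the flag being tested is unset, so test receives an empty string on the left-hand side of -eq. A minimal reproduction and a defensive variant are sketched below; the variable name is hypothetical and chosen only for illustration.
# Reproduction of the warning seen above: comparing an empty string with -eq.
flag=""
[ "$flag" -eq 1 ]            # prints "[: : integer expression expected", exit status 2

# Defensive form: default the unset or empty flag to 0 before the numeric test.
if [ "${flag:-0}" -eq 1 ]; then
    echo "feature enabled"
fi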
00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:35.118 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:35.119 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:35.119 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.119 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:35.119 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:35.119 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:35.119 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:35.119 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:35.119 Cannot find device "nvmf_init_br" 00:08:35.119 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:35.119 10:51:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:35.119 Cannot find device "nvmf_init_br2" 00:08:35.119 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:35.119 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:35.119 Cannot find device "nvmf_tgt_br" 00:08:35.119 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:35.119 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:35.119 Cannot find device "nvmf_tgt_br2" 00:08:35.119 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:35.119 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:35.377 Cannot find device "nvmf_init_br" 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:35.377 Cannot find device "nvmf_init_br2" 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:35.377 Cannot find device "nvmf_tgt_br" 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:35.377 Cannot find device "nvmf_tgt_br2" 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:35.377 Cannot find device "nvmf_br" 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:35.377 Cannot find device "nvmf_init_if" 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:35.377 Cannot find device "nvmf_init_if2" 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:35.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:35.377 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:35.377 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:35.378 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:35.378 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:35.378 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:35.378 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:35.636 10:51:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:35.636 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:35.636 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:08:35.636 00:08:35.636 --- 10.0.0.3 ping statistics --- 00:08:35.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.636 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:35.636 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:35.636 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.115 ms 00:08:35.636 00:08:35.636 --- 10.0.0.4 ping statistics --- 00:08:35.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.636 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:35.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:35.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:08:35.636 00:08:35.636 --- 10.0.0.1 ping statistics --- 00:08:35.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.636 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:35.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:35.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:08:35.636 00:08:35.636 --- 10.0.0.2 ping statistics --- 00:08:35.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.636 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65556 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65556 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65556 ']' 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.636 10:51:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:35.636 [2024-12-09 10:51:28.774456] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
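After the veth, bridge, and ping checks above succeed, nvmfappstart launches the target inside the nvmf_tgt_ns_spdk namespace and waitforlisten blocks until it answers on /var/tmp/spdk.sock. The sketch below condenses that launch-and-wait step; it polls the RPC socket with rpc_get_methods rather than reproducing the real waitforlisten helper.
# Sketch only, condensed from the traces above; not a copy of nvmfappstart/waitforlisten.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
tgt_pid=$!

# Poll the target's default RPC socket (/var/tmp/spdk.sock) until it responds.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
    "$rpc" -t 1 rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done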
00:08:35.636 [2024-12-09 10:51:28.774535] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.895 [2024-12-09 10:51:28.929471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.895 [2024-12-09 10:51:28.979812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.895 [2024-12-09 10:51:28.979865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.895 [2024-12-09 10:51:28.979872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.895 [2024-12-09 10:51:28.979877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.895 [2024-12-09 10:51:28.979881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.895 [2024-12-09 10:51:28.980185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.895 [2024-12-09 10:51:29.021591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.832 [2024-12-09 10:51:29.719116] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:36.832 [2024-12-09 10:51:29.743211] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.832 malloc0 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:36.832 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:36.833 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:36.833 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:36.833 { 00:08:36.833 "params": { 00:08:36.833 "name": "Nvme$subsystem", 00:08:36.833 "trtype": "$TEST_TRANSPORT", 00:08:36.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:36.833 "adrfam": "ipv4", 00:08:36.833 "trsvcid": "$NVMF_PORT", 00:08:36.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:36.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:36.833 "hdgst": ${hdgst:-false}, 00:08:36.833 "ddgst": ${ddgst:-false} 00:08:36.833 }, 00:08:36.833 "method": "bdev_nvme_attach_controller" 00:08:36.833 } 00:08:36.833 EOF 00:08:36.833 )") 00:08:36.833 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:36.833 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
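The rpc_cmd traces above assemble the zcopy target for this test: a TCP transport created with zero copy enabled, subsystem nqn.2016-06.io.spdk:cnode1 limited to 10 namespaces, data and discovery listeners on 10.0.0.3:4420, and a 32 MB malloc bdev with a 4096-byte block size exposed as namespace 1. Written out as plain rpc.py calls against the target's default RPC socket, the same sequence looks like the sketch below (the test's rpc_cmd wrapper adds retries and socket selection on top of this).
# Sketch: the setup sequence traced above, issued directly with rpc.py.
# All values are taken from the log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1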
00:08:36.833 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:36.833 10:51:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:36.833 "params": { 00:08:36.833 "name": "Nvme1", 00:08:36.833 "trtype": "tcp", 00:08:36.833 "traddr": "10.0.0.3", 00:08:36.833 "adrfam": "ipv4", 00:08:36.833 "trsvcid": "4420", 00:08:36.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:36.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:36.833 "hdgst": false, 00:08:36.833 "ddgst": false 00:08:36.833 }, 00:08:36.833 "method": "bdev_nvme_attach_controller" 00:08:36.833 }' 00:08:36.833 [2024-12-09 10:51:29.842924] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:08:36.833 [2024-12-09 10:51:29.843007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65589 ] 00:08:36.833 [2024-12-09 10:51:29.980527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.091 [2024-12-09 10:51:30.034919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.091 [2024-12-09 10:51:30.084282] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.091 Running I/O for 10 seconds... 00:08:39.407 8417.00 IOPS, 65.76 MiB/s [2024-12-09T10:51:33.524Z] 8170.00 IOPS, 63.83 MiB/s [2024-12-09T10:51:34.461Z] 8203.67 IOPS, 64.09 MiB/s [2024-12-09T10:51:35.401Z] 8171.00 IOPS, 63.84 MiB/s [2024-12-09T10:51:36.336Z] 8250.20 IOPS, 64.45 MiB/s [2024-12-09T10:51:37.272Z] 8293.17 IOPS, 64.79 MiB/s [2024-12-09T10:51:38.209Z] 8309.29 IOPS, 64.92 MiB/s [2024-12-09T10:51:39.588Z] 8292.75 IOPS, 64.79 MiB/s [2024-12-09T10:51:40.526Z] 8270.78 IOPS, 64.62 MiB/s [2024-12-09T10:51:40.526Z] 8250.70 IOPS, 64.46 MiB/s 00:08:47.347 Latency(us) 00:08:47.347 [2024-12-09T10:51:40.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.347 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:47.347 Verification LBA range: start 0x0 length 0x1000 00:08:47.347 Nvme1n1 : 10.01 8251.24 64.46 0.00 0.00 15467.50 908.63 24611.77 00:08:47.347 [2024-12-09T10:51:40.526Z] =================================================================================================================== 00:08:47.347 [2024-12-09T10:51:40.526Z] Total : 8251.24 64.46 0.00 0.00 15467.50 908.63 24611.77 00:08:47.347 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65706 00:08:47.347 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:47.347 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:47.347 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:47.347 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:47.347 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:47.347 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:47.347 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:47.347 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:47.347 { 00:08:47.347 "params": { 00:08:47.347 "name": "Nvme$subsystem", 00:08:47.347 "trtype": "$TEST_TRANSPORT", 00:08:47.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:47.347 "adrfam": "ipv4", 00:08:47.347 "trsvcid": "$NVMF_PORT", 00:08:47.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:47.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:47.347 "hdgst": ${hdgst:-false}, 00:08:47.347 "ddgst": ${ddgst:-false} 00:08:47.347 }, 00:08:47.347 "method": "bdev_nvme_attach_controller" 00:08:47.347 } 00:08:47.347 EOF 00:08:47.347 )") 00:08:47.347 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:47.347 [2024-12-09 10:51:40.405411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.347 [2024-12-09 10:51:40.405452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.347 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:47.347 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:47.347 10:51:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:47.347 "params": { 00:08:47.347 "name": "Nvme1", 00:08:47.347 "trtype": "tcp", 00:08:47.347 "traddr": "10.0.0.3", 00:08:47.347 "adrfam": "ipv4", 00:08:47.347 "trsvcid": "4420", 00:08:47.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:47.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:47.347 "hdgst": false, 00:08:47.347 "ddgst": false 00:08:47.347 }, 00:08:47.347 "method": "bdev_nvme_attach_controller" 00:08:47.347 }' 00:08:47.347 [2024-12-09 10:51:40.417349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.347 [2024-12-09 10:51:40.417376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.347 [2024-12-09 10:51:40.429321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.347 [2024-12-09 10:51:40.429341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.347 [2024-12-09 10:51:40.441300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.347 [2024-12-09 10:51:40.441319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.347 [2024-12-09 10:51:40.453278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.347 [2024-12-09 10:51:40.453297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.347 [2024-12-09 10:51:40.454175] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
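For this second run, gen_nvmf_target_json (traced above) resolves its template into the bdev_nvme_attach_controller parameters shown in the printf, and bdevperf reads the resulting JSON from an anonymous descriptor (/dev/fd/63) via process substitution. The sketch below writes an equivalent config to a regular file instead; the file path is hypothetical, the attach parameters mirror the printf output above, and the surrounding "subsystems"/"bdev" wrapper is the standard SPDK JSON config shape assumed to match what the helper emits.
# Sketch only: an equivalent of the generated config, saved to a file.
cat > /tmp/zcopy_target.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Same invocation as the run above: 5 seconds of 50/50 random read/write
# at queue depth 128 with 8 KiB I/O against the attached controller.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/zcopy_target.json -t 5 -q 128 -w randrw -M 50 -o 8192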
00:08:47.347 [2024-12-09 10:51:40.454238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65706 ] 00:08:47.347 [2024-12-09 10:51:40.469256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.347 [2024-12-09 10:51:40.469281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.347 [2024-12-09 10:51:40.481226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.347 [2024-12-09 10:51:40.481246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.347 [2024-12-09 10:51:40.493209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.347 [2024-12-09 10:51:40.493229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.347 [2024-12-09 10:51:40.505216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.347 [2024-12-09 10:51:40.505239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.347 [2024-12-09 10:51:40.517170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.347 [2024-12-09 10:51:40.517190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.529153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.529177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.541130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.541150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.553107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.553127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.565087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.565106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.577062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.577081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.589045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.589063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.601024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.601044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.607520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.607 [2024-12-09 10:51:40.613020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.613066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.624994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.625022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.636963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.636987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.648939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.648960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.657913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.607 [2024-12-09 10:51:40.660917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.660937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.672907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.672933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.684888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.684918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.696866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.696894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.707761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.607 [2024-12-09 10:51:40.708847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.708873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.720846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.720881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.732824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.732857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.744795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.744819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.756799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.756829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.768781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.607 [2024-12-09 10:51:40.768807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.607 [2024-12-09 10:51:40.780786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:47.607 [2024-12-09 10:51:40.780818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 [2024-12-09 10:51:40.792773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:40.792805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 [2024-12-09 10:51:40.804742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:40.804780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 [2024-12-09 10:51:40.816725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:40.816767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 Running I/O for 5 seconds... 00:08:47.867 [2024-12-09 10:51:40.828714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:40.828738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 [2024-12-09 10:51:40.844299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:40.844334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 [2024-12-09 10:51:40.860214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:40.860246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 [2024-12-09 10:51:40.876540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:40.876575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 [2024-12-09 10:51:40.888011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:40.888044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 [2024-12-09 10:51:40.902805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:40.902851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 [2024-12-09 10:51:40.914252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:40.914304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 [2024-12-09 10:51:40.929360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:40.929417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 [2024-12-09 10:51:40.939836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:40.939888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 [2024-12-09 10:51:40.954662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:40.954715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 [2024-12-09 10:51:40.970137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:40.970189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:08:47.867 [2024-12-09 10:51:40.985361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:40.985413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 [2024-12-09 10:51:41.001582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:41.001626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 [2024-12-09 10:51:41.018057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:41.018106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:47.867 [2024-12-09 10:51:41.029535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:47.867 [2024-12-09 10:51:41.029586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.045182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.045221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.061936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.061972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.077505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.077538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.091694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.091727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.105989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.106017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.120744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.120789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.138005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.138062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.155172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.155227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.170692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.170755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.185144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.185197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.200589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 
[2024-12-09 10:51:41.200643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.215346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.215398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.226057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.226105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.240918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.240975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.256860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.256917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.270952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.271003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.285476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.285514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.127 [2024-12-09 10:51:41.297149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.127 [2024-12-09 10:51:41.297198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.387 [2024-12-09 10:51:41.313437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.387 [2024-12-09 10:51:41.313483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.387 [2024-12-09 10:51:41.329846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.387 [2024-12-09 10:51:41.329901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.387 [2024-12-09 10:51:41.346714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.387 [2024-12-09 10:51:41.346785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.387 [2024-12-09 10:51:41.363431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.387 [2024-12-09 10:51:41.363493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.387 [2024-12-09 10:51:41.379914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.387 [2024-12-09 10:51:41.379971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.387 [2024-12-09 10:51:41.396760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.387 [2024-12-09 10:51:41.396821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.387 [2024-12-09 10:51:41.413801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.387 [2024-12-09 10:51:41.413850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.387 [2024-12-09 10:51:41.430980] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.387 [2024-12-09 10:51:41.431030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.387 [2024-12-09 10:51:41.446964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.387 [2024-12-09 10:51:41.447008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.387 [2024-12-09 10:51:41.457884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.387 [2024-12-09 10:51:41.457928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.387 [2024-12-09 10:51:41.473670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.387 [2024-12-09 10:51:41.473701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.387 [2024-12-09 10:51:41.490496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.387 [2024-12-09 10:51:41.490528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.387 [2024-12-09 10:51:41.506441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.387 [2024-12-09 10:51:41.506471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.387 [2024-12-09 10:51:41.516814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.387 [2024-12-09 10:51:41.516843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.387 [2024-12-09 10:51:41.533202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.388 [2024-12-09 10:51:41.533235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.388 [2024-12-09 10:51:41.549442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.388 [2024-12-09 10:51:41.549475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.646 [2024-12-09 10:51:41.567039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.646 [2024-12-09 10:51:41.567075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.646 [2024-12-09 10:51:41.583941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.646 [2024-12-09 10:51:41.583978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.646 [2024-12-09 10:51:41.600918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.646 [2024-12-09 10:51:41.600961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.646 [2024-12-09 10:51:41.617414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.646 [2024-12-09 10:51:41.617461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.646 [2024-12-09 10:51:41.633804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.646 [2024-12-09 10:51:41.633851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.646 [2024-12-09 10:51:41.650631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.646 [2024-12-09 10:51:41.650704] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.646 [2024-12-09 10:51:41.666921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.646 [2024-12-09 10:51:41.666962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.646 [2024-12-09 10:51:41.684913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.646 [2024-12-09 10:51:41.684955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.646 [2024-12-09 10:51:41.700503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.646 [2024-12-09 10:51:41.700537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.646 [2024-12-09 10:51:41.716525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.646 [2024-12-09 10:51:41.716561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.646 [2024-12-09 10:51:41.733434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.646 [2024-12-09 10:51:41.733471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.646 [2024-12-09 10:51:41.749484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.646 [2024-12-09 10:51:41.749521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.647 [2024-12-09 10:51:41.764838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.647 [2024-12-09 10:51:41.764878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.647 [2024-12-09 10:51:41.781256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.647 [2024-12-09 10:51:41.781295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.647 [2024-12-09 10:51:41.796651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.647 [2024-12-09 10:51:41.796693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.647 [2024-12-09 10:51:41.811587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.647 [2024-12-09 10:51:41.811635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.906 14696.00 IOPS, 114.81 MiB/s [2024-12-09T10:51:42.085Z] [2024-12-09 10:51:41.827395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.906 [2024-12-09 10:51:41.827443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.906 [2024-12-09 10:51:41.841618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.906 [2024-12-09 10:51:41.841660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.906 [2024-12-09 10:51:41.856995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.906 [2024-12-09 10:51:41.857047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.906 [2024-12-09 10:51:41.873086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.906 [2024-12-09 10:51:41.873143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.906 [2024-12-09 
10:51:41.884338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.906 [2024-12-09 10:51:41.884386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.906 [2024-12-09 10:51:41.899130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.906 [2024-12-09 10:51:41.899181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.907 [2024-12-09 10:51:41.910705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.907 [2024-12-09 10:51:41.910762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.907 [2024-12-09 10:51:41.926021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.907 [2024-12-09 10:51:41.926057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.907 [2024-12-09 10:51:41.942121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.907 [2024-12-09 10:51:41.942156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.907 [2024-12-09 10:51:41.958328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.907 [2024-12-09 10:51:41.958361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.907 [2024-12-09 10:51:41.973290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.907 [2024-12-09 10:51:41.973322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.907 [2024-12-09 10:51:41.989185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.907 [2024-12-09 10:51:41.989216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.907 [2024-12-09 10:51:42.003946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.907 [2024-12-09 10:51:42.003976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.907 [2024-12-09 10:51:42.019317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.907 [2024-12-09 10:51:42.019350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.907 [2024-12-09 10:51:42.034052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.907 [2024-12-09 10:51:42.034081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.907 [2024-12-09 10:51:42.049302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.907 [2024-12-09 10:51:42.049335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.907 [2024-12-09 10:51:42.064175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.907 [2024-12-09 10:51:42.064203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:48.907 [2024-12-09 10:51:42.079742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:48.907 [2024-12-09 10:51:42.079783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.166 [2024-12-09 10:51:42.094840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.166 [2024-12-09 10:51:42.094870] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.166 [2024-12-09 10:51:42.111049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.166 [2024-12-09 10:51:42.111079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.166 [2024-12-09 10:51:42.127367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.166 [2024-12-09 10:51:42.127399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.166 [2024-12-09 10:51:42.145512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.166 [2024-12-09 10:51:42.145546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.166 [2024-12-09 10:51:42.161238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.166 [2024-12-09 10:51:42.161268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.166 [2024-12-09 10:51:42.177227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.166 [2024-12-09 10:51:42.177265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.166 [2024-12-09 10:51:42.188853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.166 [2024-12-09 10:51:42.188884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.166 [2024-12-09 10:51:42.203278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.166 [2024-12-09 10:51:42.203307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.166 [2024-12-09 10:51:42.218050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.166 [2024-12-09 10:51:42.218079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.166 [2024-12-09 10:51:42.233542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.166 [2024-12-09 10:51:42.233572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.167 [2024-12-09 10:51:42.248223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.167 [2024-12-09 10:51:42.248252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.167 [2024-12-09 10:51:42.259450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.167 [2024-12-09 10:51:42.259481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.167 [2024-12-09 10:51:42.274695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.167 [2024-12-09 10:51:42.274724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.167 [2024-12-09 10:51:42.289683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.167 [2024-12-09 10:51:42.289715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.167 [2024-12-09 10:51:42.304566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.167 [2024-12-09 10:51:42.304596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.167 [2024-12-09 10:51:42.318566] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.167 [2024-12-09 10:51:42.318595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.167 [2024-12-09 10:51:42.333376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.167 [2024-12-09 10:51:42.333405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.344541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.344571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.359631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.359661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.374800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.374829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.389536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.389565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.405321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.405348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.419812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.419853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.431032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.431063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.445860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.445889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.461657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.461688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.476568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.476595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.496256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.496290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.511578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.511610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.527639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.527672] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.538736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.538775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.546201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.546237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.556526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.556555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.564371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.564400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.579090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.579120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.426 [2024-12-09 10:51:42.593494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.426 [2024-12-09 10:51:42.593526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.604744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.604784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.619242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.619276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.630038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.630067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.645473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.645502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.661338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.661366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.677070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.677099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.690639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.690719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.706023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.706073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.721552] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.721599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.736021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.736069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.749843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.749894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.768660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.768705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.784774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.784820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.799640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.799677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.810805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.810834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 15036.50 IOPS, 117.47 MiB/s [2024-12-09T10:51:42.869Z] [2024-12-09 10:51:42.826582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.826613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.841820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.841851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.690 [2024-12-09 10:51:42.855803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.690 [2024-12-09 10:51:42.855828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:42.870219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:42.870250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:42.880741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:42.880781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:42.895584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:42.895614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:42.906104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:42.906134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:42.921069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:49.957 [2024-12-09 10:51:42.921098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:42.936443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:42.936474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:42.951398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:42.951426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:42.967238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:42.967267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:42.981131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:42.981159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:42.996424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:42.996454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:43.011730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:43.011771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:43.026403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:43.026431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:43.042158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:43.042186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:43.056233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:43.056262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:43.071483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:43.071517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:43.087254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:43.087284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:43.101556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:43.101584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:43.112029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:43.112057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:49.957 [2024-12-09 10:51:43.126417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:49.957 [2024-12-09 10:51:43.126444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.140238] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.140267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.154546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.154576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.165418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.165446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.179925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.179953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.190664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.190691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.205856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.205884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.221894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.221923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.233012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.233039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.247982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.248009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.263233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.263262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.277373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.277401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.291611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.291639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.302036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.302063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.316634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.316664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.328085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.328113] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.342865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.342893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.357917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.357944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.371948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.371977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.217 [2024-12-09 10:51:43.385505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.217 [2024-12-09 10:51:43.385536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.399510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.399539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.413041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.413069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.426580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.426608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.440432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.440460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.454094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.454123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.467302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.467332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.481093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.481122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.499494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.499520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.514895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.514924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.528805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.528835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.542213] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.542242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.556186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.556215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.569476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.569504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.583218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.583247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.596913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.596940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.610578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.610606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.624544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.624587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.637879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.637908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.477 [2024-12-09 10:51:43.651762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.477 [2024-12-09 10:51:43.651793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.665622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.665651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.679743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.679781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.693216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.693246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.706829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.706855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.720539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.720567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.733792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.733822] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.748107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.748135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.762250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.762278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.773485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.773515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.787475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.787504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.800874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.800900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.814410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.814438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 15529.67 IOPS, 121.33 MiB/s [2024-12-09T10:51:43.915Z] [2024-12-09 10:51:43.827665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.827693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.841168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.841197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.854893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.854921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.868506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.868535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.883056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.883084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.736 [2024-12-09 10:51:43.899375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.736 [2024-12-09 10:51:43.899407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:43.914049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:43.914079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:43.929595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:43.929627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 
10:51:43.943817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:43.943851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:43.954325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:43.954359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:43.969033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:43.969086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:43.984778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:43.984829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:43.999098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:43.999130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:44.013633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:44.013662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:44.024217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:44.024245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:44.038837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:44.038866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:44.049643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:44.049672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:44.064401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:44.064445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:44.075428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:44.075456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:44.090152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:44.090181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:44.105806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:44.105839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:44.120890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:44.120918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:44.136609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:44.136638] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:44.150818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:44.150851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.996 [2024-12-09 10:51:44.161941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:50.996 [2024-12-09 10:51:44.161970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.177019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.177062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.193417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.193476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.204104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.204156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.218642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.218719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.232513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.232545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.247759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.247803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.262558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.262588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.276646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.276674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.290918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.290946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.306508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.306538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.321284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.321312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.332192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.332221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.346711] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.346742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.362332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.362363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.376660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.376690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.390803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.390833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.405346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.405374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.416232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.416261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.255 [2024-12-09 10:51:44.431063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.255 [2024-12-09 10:51:44.431093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.514 [2024-12-09 10:51:44.446242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.514 [2024-12-09 10:51:44.446272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.514 [2024-12-09 10:51:44.460882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.460910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.515 [2024-12-09 10:51:44.476307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.476335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.515 [2024-12-09 10:51:44.490411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.490443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.515 [2024-12-09 10:51:44.504534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.504563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.515 [2024-12-09 10:51:44.518210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.518239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.515 [2024-12-09 10:51:44.532131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.532160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.515 [2024-12-09 10:51:44.545474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.545503] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.515 [2024-12-09 10:51:44.559165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.559192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.515 [2024-12-09 10:51:44.573078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.573107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.515 [2024-12-09 10:51:44.586938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.586966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.515 [2024-12-09 10:51:44.601105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.601133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.515 [2024-12-09 10:51:44.616989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.617016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.515 [2024-12-09 10:51:44.630505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.630535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.515 [2024-12-09 10:51:44.644000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.644029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.515 [2024-12-09 10:51:44.657604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.657636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.515 [2024-12-09 10:51:44.672282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.672312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.515 [2024-12-09 10:51:44.688119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.515 [2024-12-09 10:51:44.688151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.702489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.702521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.713331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.713359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.727539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.727568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.740774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.740803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.754530] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.754558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.768055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.768083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.782339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.782372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.796565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.796595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.810709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.810738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 15830.50 IOPS, 123.68 MiB/s [2024-12-09T10:51:44.953Z] [2024-12-09 10:51:44.825275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.825302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.836214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.836242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.851246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.851276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.866192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.866220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.880363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.880391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.893944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.893972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.908073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.908101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.922452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.922475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:51.774 [2024-12-09 10:51:44.937928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:51.774 [2024-12-09 10:51:44.937957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.041 [2024-12-09 10:51:44.952138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:52.041 [2024-12-09 10:51:44.952166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.041 [2024-12-09 10:51:44.965786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.041 [2024-12-09 10:51:44.965813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.041 [2024-12-09 10:51:44.980353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.041 [2024-12-09 10:51:44.980383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.041 [2024-12-09 10:51:44.991040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.041 [2024-12-09 10:51:44.991068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.041 [2024-12-09 10:51:45.005252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.041 [2024-12-09 10:51:45.005281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.041 [2024-12-09 10:51:45.018702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.041 [2024-12-09 10:51:45.018729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.041 [2024-12-09 10:51:45.032255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.041 [2024-12-09 10:51:45.032283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.041 [2024-12-09 10:51:45.046476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.041 [2024-12-09 10:51:45.046504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.041 [2024-12-09 10:51:45.057131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.041 [2024-12-09 10:51:45.057158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.042 [2024-12-09 10:51:45.071377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.042 [2024-12-09 10:51:45.071406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.042 [2024-12-09 10:51:45.084668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.042 [2024-12-09 10:51:45.084698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.042 [2024-12-09 10:51:45.098710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.042 [2024-12-09 10:51:45.098738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.042 [2024-12-09 10:51:45.112594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.042 [2024-12-09 10:51:45.112625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.042 [2024-12-09 10:51:45.126616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.042 [2024-12-09 10:51:45.126644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.042 [2024-12-09 10:51:45.140168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.042 [2024-12-09 10:51:45.140196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.042 [2024-12-09 10:51:45.153915] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.042 [2024-12-09 10:51:45.153943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.042 [2024-12-09 10:51:45.167390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.042 [2024-12-09 10:51:45.167419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.042 [2024-12-09 10:51:45.181167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.042 [2024-12-09 10:51:45.181194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.042 [2024-12-09 10:51:45.195084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.042 [2024-12-09 10:51:45.195113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.042 [2024-12-09 10:51:45.209048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.042 [2024-12-09 10:51:45.209077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.302 [2024-12-09 10:51:45.220143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.302 [2024-12-09 10:51:45.220173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.302 [2024-12-09 10:51:45.234705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.302 [2024-12-09 10:51:45.234735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.302 [2024-12-09 10:51:45.245404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.302 [2024-12-09 10:51:45.245433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.302 [2024-12-09 10:51:45.260046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.302 [2024-12-09 10:51:45.260076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.302 [2024-12-09 10:51:45.274005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.302 [2024-12-09 10:51:45.274036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.302 [2024-12-09 10:51:45.287974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.302 [2024-12-09 10:51:45.288002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.302 [2024-12-09 10:51:45.302198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.302 [2024-12-09 10:51:45.302227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.302 [2024-12-09 10:51:45.316464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.302 [2024-12-09 10:51:45.316494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.302 [2024-12-09 10:51:45.327121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.302 [2024-12-09 10:51:45.327149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.302 [2024-12-09 10:51:45.341431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.302 [2024-12-09 10:51:45.341459] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.302 [2024-12-09 10:51:45.351999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.302 [2024-12-09 10:51:45.352026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.303 [2024-12-09 10:51:45.366051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.303 [2024-12-09 10:51:45.366080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.303 [2024-12-09 10:51:45.379431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.303 [2024-12-09 10:51:45.379460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.303 [2024-12-09 10:51:45.393374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.303 [2024-12-09 10:51:45.393403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.303 [2024-12-09 10:51:45.406689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.303 [2024-12-09 10:51:45.406716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.303 [2024-12-09 10:51:45.420308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.303 [2024-12-09 10:51:45.420336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.303 [2024-12-09 10:51:45.433948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.303 [2024-12-09 10:51:45.433977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.303 [2024-12-09 10:51:45.448087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.303 [2024-12-09 10:51:45.448115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.303 [2024-12-09 10:51:45.461631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.303 [2024-12-09 10:51:45.461660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.303 [2024-12-09 10:51:45.475300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.303 [2024-12-09 10:51:45.475331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.489159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.489188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.502783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.502810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.516787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.516815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.530757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.530784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.544727] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.544765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.558567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.558596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.572334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.572362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.586080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.586109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.599787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.599815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.613725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.613764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.627676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.627706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.642016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.642045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.653014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.653043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.667591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.667621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.679327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.679356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.693875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.693902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.704895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.704924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.719618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.719651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.562 [2024-12-09 10:51:45.730145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.562 [2024-12-09 10:51:45.730173] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.822 [2024-12-09 10:51:45.744705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.822 [2024-12-09 10:51:45.744735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.822 [2024-12-09 10:51:45.755752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.822 [2024-12-09 10:51:45.755780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.822 [2024-12-09 10:51:45.770064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.822 [2024-12-09 10:51:45.770093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.822 [2024-12-09 10:51:45.783488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.822 [2024-12-09 10:51:45.783516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.822 [2024-12-09 10:51:45.797677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.822 [2024-12-09 10:51:45.797706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.822 [2024-12-09 10:51:45.811078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.822 [2024-12-09 10:51:45.811108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.822 16064.40 IOPS, 125.50 MiB/s [2024-12-09T10:51:46.001Z] [2024-12-09 10:51:45.822377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.822 [2024-12-09 10:51:45.822405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.822 00:08:52.822 Latency(us) 00:08:52.822 [2024-12-09T10:51:46.001Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.822 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:52.822 Nvme1n1 : 5.01 16067.68 125.53 0.00 0.00 7958.59 3162.33 18773.63 00:08:52.822 [2024-12-09T10:51:46.001Z] =================================================================================================================== 00:08:52.822 [2024-12-09T10:51:46.001Z] Total : 16067.68 125.53 0.00 0.00 7958.59 3162.33 18773.63 00:08:52.822 [2024-12-09 10:51:45.833413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.822 [2024-12-09 10:51:45.833438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.822 [2024-12-09 10:51:45.845375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.822 [2024-12-09 10:51:45.845408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.822 [2024-12-09 10:51:45.857356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.822 [2024-12-09 10:51:45.857391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.822 [2024-12-09 10:51:45.869332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.822 [2024-12-09 10:51:45.869361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.822 [2024-12-09 10:51:45.881311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.822 [2024-12-09 
10:51:45.881339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.822 [2024-12-09 10:51:45.893290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.822 [2024-12-09 10:51:45.893323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.823 [2024-12-09 10:51:45.905266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.823 [2024-12-09 10:51:45.905295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.823 [2024-12-09 10:51:45.917249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.823 [2024-12-09 10:51:45.917276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.823 [2024-12-09 10:51:45.929225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.823 [2024-12-09 10:51:45.929252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.823 [2024-12-09 10:51:45.937201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.823 [2024-12-09 10:51:45.937219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.823 [2024-12-09 10:51:45.945187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.823 [2024-12-09 10:51:45.945206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.823 [2024-12-09 10:51:45.953177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.823 [2024-12-09 10:51:45.953200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.823 [2024-12-09 10:51:45.961162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.823 [2024-12-09 10:51:45.961182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.823 [2024-12-09 10:51:45.973141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.823 [2024-12-09 10:51:45.973159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.823 [2024-12-09 10:51:45.981125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.823 [2024-12-09 10:51:45.981143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.823 [2024-12-09 10:51:45.989113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.823 [2024-12-09 10:51:45.989131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.823 [2024-12-09 10:51:45.997098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:52.823 [2024-12-09 10:51:45.997116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.081 [2024-12-09 10:51:46.005084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.081 [2024-12-09 10:51:46.005102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.081 [2024-12-09 10:51:46.017065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.081 [2024-12-09 10:51:46.017085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.081 [2024-12-09 10:51:46.025051] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:53.081 [2024-12-09 10:51:46.025069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:53.082 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65706) - No such process 00:08:53.082 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65706 00:08:53.082 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.082 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.082 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.082 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.082 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:53.082 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.082 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.082 delay0 00:08:53.082 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.082 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:53.082 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.082 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.082 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.082 10:51:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:08:53.082 [2024-12-09 10:51:46.234199] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:59.666 Initializing NVMe Controllers 00:08:59.666 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:08:59.666 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:59.666 Initialization complete. Launching workers. 
00:08:59.666 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 84 00:08:59.666 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 371, failed to submit 33 00:08:59.666 success 244, unsuccessful 127, failed 0 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:59.666 rmmod nvme_tcp 00:08:59.666 rmmod nvme_fabrics 00:08:59.666 rmmod nvme_keyring 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65556 ']' 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65556 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65556 ']' 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65556 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65556 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:59.666 killing process with pid 65556 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65556' 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65556 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65556 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:59.666 10:51:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:59.666 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:59.926 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:59.926 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:59.927 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:59.927 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:59.927 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.927 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.927 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.927 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:08:59.927 00:08:59.927 real 0m25.036s 00:08:59.927 user 0m41.295s 00:08:59.927 sys 0m6.528s 00:08:59.927 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.927 10:51:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.927 ************************************ 00:08:59.927 END TEST nvmf_zcopy 00:08:59.927 ************************************ 00:08:59.927 10:51:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:59.927 10:51:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:59.927 10:51:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.927 10:51:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.927 ************************************ 00:08:59.927 START TEST nvmf_nmic 00:08:59.927 ************************************ 00:08:59.927 10:51:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:00.323 * Looking for test storage... 00:09:00.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:00.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.323 --rc genhtml_branch_coverage=1 00:09:00.323 --rc genhtml_function_coverage=1 00:09:00.323 --rc genhtml_legend=1 00:09:00.323 --rc geninfo_all_blocks=1 00:09:00.323 --rc geninfo_unexecuted_blocks=1 00:09:00.323 00:09:00.323 ' 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:00.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.323 --rc genhtml_branch_coverage=1 00:09:00.323 --rc genhtml_function_coverage=1 00:09:00.323 --rc genhtml_legend=1 00:09:00.323 --rc geninfo_all_blocks=1 00:09:00.323 --rc geninfo_unexecuted_blocks=1 00:09:00.323 00:09:00.323 ' 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:00.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.323 --rc genhtml_branch_coverage=1 00:09:00.323 --rc genhtml_function_coverage=1 00:09:00.323 --rc genhtml_legend=1 00:09:00.323 --rc geninfo_all_blocks=1 00:09:00.323 --rc geninfo_unexecuted_blocks=1 00:09:00.323 00:09:00.323 ' 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:00.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.323 --rc genhtml_branch_coverage=1 00:09:00.323 --rc genhtml_function_coverage=1 00:09:00.323 --rc genhtml_legend=1 00:09:00.323 --rc geninfo_all_blocks=1 00:09:00.323 --rc geninfo_unexecuted_blocks=1 00:09:00.323 00:09:00.323 ' 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.323 10:51:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.323 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:00.324 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:00.324 10:51:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:00.324 Cannot 
find device "nvmf_init_br" 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:00.324 Cannot find device "nvmf_init_br2" 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:00.324 Cannot find device "nvmf_tgt_br" 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:00.324 Cannot find device "nvmf_tgt_br2" 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:00.324 Cannot find device "nvmf_init_br" 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:00.324 Cannot find device "nvmf_init_br2" 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:00.324 Cannot find device "nvmf_tgt_br" 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:00.324 Cannot find device "nvmf_tgt_br2" 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:00.324 Cannot find device "nvmf_br" 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:00.324 Cannot find device "nvmf_init_if" 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:00.324 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:00.584 Cannot find device "nvmf_init_if2" 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:00.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:00.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:00.584 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:00.585 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:00.585 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:09:00.585 00:09:00.585 --- 10.0.0.3 ping statistics --- 00:09:00.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.585 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:00.585 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:00.585 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:09:00.585 00:09:00.585 --- 10.0.0.4 ping statistics --- 00:09:00.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.585 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:00.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:00.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:09:00.585 00:09:00.585 --- 10.0.0.1 ping statistics --- 00:09:00.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.585 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:00.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:00.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:09:00.585 00:09:00.585 --- 10.0.0.2 ping statistics --- 00:09:00.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.585 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:00.585 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:00.844 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:00.844 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:00.844 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.844 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.844 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:00.844 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=66083 00:09:00.844 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 66083 00:09:00.844 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 66083 ']' 00:09:00.844 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.844 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.844 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.844 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.844 10:51:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.844 [2024-12-09 10:51:53.824229] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
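At this point nvmf_veth_init has finished and the target has just been launched inside the namespace. A condensed sketch of the topology built above, using the names and addresses from this trace (second veth pair omitted for brevity; this is a summary, not the verbatim script):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                          # reachability check, as in the trace

The target itself is started as ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, after which waitforlisten polls /var/tmp/spdk.sock.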
00:09:00.844 [2024-12-09 10:51:53.824285] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.844 [2024-12-09 10:51:53.976504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.103 [2024-12-09 10:51:54.029760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.103 [2024-12-09 10:51:54.029887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.103 [2024-12-09 10:51:54.029940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.103 [2024-12-09 10:51:54.029967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.103 [2024-12-09 10:51:54.029983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.103 [2024-12-09 10:51:54.030954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.103 [2024-12-09 10:51:54.031050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.103 [2024-12-09 10:51:54.031120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.103 [2024-12-09 10:51:54.031139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.103 [2024-12-09 10:51:54.072698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.672 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.672 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:01.672 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:01.672 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:01.672 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.672 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.672 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.672 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.672 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.672 [2024-12-09 10:51:54.808003] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.672 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.672 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:01.672 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.672 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.672 Malloc0 00:09:01.672 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.672 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:01.672 10:51:54 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.672 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.931 [2024-12-09 10:51:54.879446] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.931 test case1: single bdev can't be used in multiple subsystems 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.931 [2024-12-09 10:51:54.915273] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:01.931 [2024-12-09 10:51:54.915299] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:01.931 [2024-12-09 10:51:54.915307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.931 request: 00:09:01.931 { 00:09:01.931 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:01.931 "namespace": { 00:09:01.931 "bdev_name": "Malloc0", 00:09:01.931 "no_auto_visible": false, 00:09:01.931 "hide_metadata": false 00:09:01.931 }, 00:09:01.931 "method": "nvmf_subsystem_add_ns", 00:09:01.931 "req_id": 1 00:09:01.931 } 00:09:01.931 Got JSON-RPC error response 00:09:01.931 response: 00:09:01.931 { 00:09:01.931 "code": -32602, 00:09:01.931 "message": "Invalid parameters" 00:09:01.931 } 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:01.931 Adding namespace failed - expected result. 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:01.931 test case2: host connect to nvmf target in multiple paths 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.931 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:01.932 [2024-12-09 10:51:54.931361] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:01.932 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.932 10:51:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid=0813c78c-bf40-477e-b94d-3900e5d9beb7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:01.932 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid=0813c78c-bf40-477e-b94d-3900e5d9beb7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:02.192 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:02.192 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:02.192 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:02.192 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:02.192 10:51:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:04.100 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:04.100 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:04.100 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.100 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:04.100 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:09:04.100 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:04.100 10:51:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:04.100 [global] 00:09:04.100 thread=1 00:09:04.100 invalidate=1 00:09:04.100 rw=write 00:09:04.100 time_based=1 00:09:04.100 runtime=1 00:09:04.100 ioengine=libaio 00:09:04.100 direct=1 00:09:04.100 bs=4096 00:09:04.100 iodepth=1 00:09:04.100 norandommap=0 00:09:04.100 numjobs=1 00:09:04.100 00:09:04.100 verify_dump=1 00:09:04.100 verify_backlog=512 00:09:04.100 verify_state_save=0 00:09:04.100 do_verify=1 00:09:04.100 verify=crc32c-intel 00:09:04.100 [job0] 00:09:04.100 filename=/dev/nvme0n1 00:09:04.359 Could not set queue depth (nvme0n1) 00:09:04.360 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:04.360 fio-3.35 00:09:04.360 Starting 1 thread 00:09:05.738 00:09:05.738 job0: (groupid=0, jobs=1): err= 0: pid=66180: Mon Dec 9 10:51:58 2024 00:09:05.738 read: IOPS=3993, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1001msec) 00:09:05.738 slat (usec): min=5, max=107, avg= 7.68, stdev= 4.24 00:09:05.738 clat (usec): min=56, max=416, avg=139.57, stdev=18.30 00:09:05.738 lat (usec): min=101, max=426, avg=147.25, stdev=19.12 00:09:05.738 clat percentiles (usec): 00:09:05.738 | 1.00th=[ 104], 5.00th=[ 112], 10.00th=[ 117], 20.00th=[ 126], 00:09:05.738 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 143], 00:09:05.738 | 70.00th=[ 147], 80.00th=[ 153], 90.00th=[ 161], 95.00th=[ 169], 00:09:05.738 | 99.00th=[ 192], 99.50th=[ 204], 99.90th=[ 231], 99.95th=[ 318], 00:09:05.738 | 99.99th=[ 416] 00:09:05.738 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:09:05.738 slat (usec): min=8, max=122, avg=12.46, stdev= 7.98 00:09:05.738 clat (usec): min=61, max=371, avg=86.03, stdev=13.77 00:09:05.738 lat (usec): min=70, max=380, avg=98.49, stdev=17.61 00:09:05.738 clat percentiles (usec): 00:09:05.738 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 71], 20.00th=[ 75], 00:09:05.738 | 30.00th=[ 79], 40.00th=[ 83], 50.00th=[ 86], 60.00th=[ 89], 00:09:05.738 | 70.00th=[ 92], 80.00th=[ 95], 90.00th=[ 101], 95.00th=[ 109], 00:09:05.738 | 99.00th=[ 120], 99.50th=[ 126], 99.90th=[ 159], 99.95th=[ 239], 00:09:05.738 | 99.99th=[ 371] 00:09:05.738 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:09:05.738 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:05.738 lat (usec) : 100=45.19%, 250=54.75%, 500=0.06% 00:09:05.738 cpu : usr=1.60%, sys=6.70%, ctx=8093, majf=0, minf=5 00:09:05.738 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.738 issued rwts: total=3997,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.738 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.738 00:09:05.738 Run status group 0 (all jobs): 00:09:05.738 READ: bw=15.6MiB/s (16.4MB/s), 15.6MiB/s-15.6MiB/s (16.4MB/s-16.4MB/s), io=15.6MiB (16.4MB), run=1001-1001msec 00:09:05.738 WRITE: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=16.0MiB (16.8MB), run=1001-1001msec 00:09:05.738 00:09:05.738 Disk stats (read/write): 00:09:05.738 nvme0n1: ios=3634/3747, merge=0/0, ticks=539/342, in_queue=881, 
util=91.17% 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.738 rmmod nvme_tcp 00:09:05.738 rmmod nvme_fabrics 00:09:05.738 rmmod nvme_keyring 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 66083 ']' 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 66083 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 66083 ']' 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 66083 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66083 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.738 killing process with pid 66083 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66083' 00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 66083 
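Cleanup then runs in reverse: disconnect the initiator, unload the kernel NVMe/TCP stack, stop the target, strip only the SPDK-tagged firewall rules, and tear down the virtual links and namespace. A condensed sketch of that path, using the pid and comment tag from this run (the link and namespace removal is what nvmf_veth_fini and _remove_spdk_ns do here, in spirit rather than verbatim):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 66083 && wait 66083                                 # nvmfpid for this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the rules tagged by ipts()
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns delete nvmf_tgt_ns_spdk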
00:09:05.738 10:51:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 66083 00:09:05.997 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:05.997 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:05.997 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:05.997 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:05.997 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:05.997 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:05.997 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:05.997 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:05.997 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:05.997 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:05.997 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:05.997 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:05.997 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:05.997 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:05.998 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:05.998 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:05.998 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:05.998 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:06.256 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:06.256 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:06.256 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:06.256 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:06.257 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:06.257 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.257 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.257 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.257 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:06.257 00:09:06.257 real 0m6.279s 00:09:06.257 user 0m19.331s 00:09:06.257 sys 0m2.119s 00:09:06.257 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.257 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:06.257 ************************************ 00:09:06.257 
END TEST nvmf_nmic 00:09:06.257 ************************************ 00:09:06.257 10:51:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:06.257 10:51:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:06.257 10:51:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.257 10:51:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:06.257 ************************************ 00:09:06.257 START TEST nvmf_fio_target 00:09:06.257 ************************************ 00:09:06.257 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:06.516 * Looking for test storage... 00:09:06.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:06.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.516 --rc genhtml_branch_coverage=1 00:09:06.516 --rc genhtml_function_coverage=1 00:09:06.516 --rc genhtml_legend=1 00:09:06.516 --rc geninfo_all_blocks=1 00:09:06.516 --rc geninfo_unexecuted_blocks=1 00:09:06.516 00:09:06.516 ' 00:09:06.516 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:06.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.516 --rc genhtml_branch_coverage=1 00:09:06.516 --rc genhtml_function_coverage=1 00:09:06.516 --rc genhtml_legend=1 00:09:06.517 --rc geninfo_all_blocks=1 00:09:06.517 --rc geninfo_unexecuted_blocks=1 00:09:06.517 00:09:06.517 ' 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:06.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.517 --rc genhtml_branch_coverage=1 00:09:06.517 --rc genhtml_function_coverage=1 00:09:06.517 --rc genhtml_legend=1 00:09:06.517 --rc geninfo_all_blocks=1 00:09:06.517 --rc geninfo_unexecuted_blocks=1 00:09:06.517 00:09:06.517 ' 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:06.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.517 --rc genhtml_branch_coverage=1 00:09:06.517 --rc genhtml_function_coverage=1 00:09:06.517 --rc genhtml_legend=1 00:09:06.517 --rc geninfo_all_blocks=1 00:09:06.517 --rc geninfo_unexecuted_blocks=1 00:09:06.517 00:09:06.517 ' 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:06.517 
10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.517 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:06.517 10:51:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:06.517 Cannot find device "nvmf_init_br" 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:06.517 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:06.776 Cannot find device "nvmf_init_br2" 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:06.776 Cannot find device "nvmf_tgt_br" 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:06.776 Cannot find device "nvmf_tgt_br2" 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:06.776 Cannot find device "nvmf_init_br" 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:06.776 Cannot find device "nvmf_init_br2" 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:06.776 Cannot find device "nvmf_tgt_br" 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:06.776 Cannot find device "nvmf_tgt_br2" 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:06.776 Cannot find device "nvmf_br" 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:06.776 Cannot find device "nvmf_init_if" 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:06.776 Cannot find device "nvmf_init_if2" 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:06.776 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:06.776 
10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:06.776 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:06.776 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:07.035 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:07.035 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:07.035 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:07.035 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:07.035 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:07.035 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:07.035 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:07.035 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:07.035 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:07.035 10:51:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:07.035 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:07.035 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:07.035 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:07.035 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:07.035 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:07.035 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:07.036 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:07.036 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.136 ms 00:09:07.036 00:09:07.036 --- 10.0.0.3 ping statistics --- 00:09:07.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.036 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:07.036 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:07.036 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:09:07.036 00:09:07.036 --- 10.0.0.4 ping statistics --- 00:09:07.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.036 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:07.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:07.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:09:07.036 00:09:07.036 --- 10.0.0.1 ping statistics --- 00:09:07.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.036 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:07.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:07.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:09:07.036 00:09:07.036 --- 10.0.0.2 ping statistics --- 00:09:07.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.036 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66408 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66408 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66408 ']' 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.036 10:52:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:07.036 [2024-12-09 10:52:00.211462] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:09:07.036 [2024-12-09 10:52:00.211512] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.295 [2024-12-09 10:52:00.362097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.295 [2024-12-09 10:52:00.411544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.295 [2024-12-09 10:52:00.411581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.295 [2024-12-09 10:52:00.411586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.295 [2024-12-09 10:52:00.411591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.295 [2024-12-09 10:52:00.411596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:07.295 [2024-12-09 10:52:00.412449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.295 [2024-12-09 10:52:00.413782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.295 [2024-12-09 10:52:00.413882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.295 [2024-12-09 10:52:00.413885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.295 [2024-12-09 10:52:00.454864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:08.228 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.228 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:08.228 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:08.228 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:08.228 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:08.228 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.228 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:08.228 [2024-12-09 10:52:01.312842] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.228 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:08.486 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:08.486 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:08.744 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:08.744 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:09.001 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:09.001 10:52:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:09.258 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:09.258 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:09.258 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:09.516 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:09.516 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:09.776 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:09.776 10:52:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:10.036 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:10.036 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:10.294 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:10.294 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:10.294 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:10.551 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:10.551 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:10.834 10:52:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:11.092 [2024-12-09 10:52:04.049131] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:11.092 10:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:11.350 10:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:11.350 10:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid=0813c78c-bf40-477e-b94d-3900e5d9beb7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:11.608 10:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:11.608 10:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:11.608 10:52:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.608 10:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:11.608 10:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:11.608 10:52:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:13.506 10:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:13.506 10:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:13.506 10:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:13.506 10:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:13.506 10:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.506 10:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:13.506 10:52:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:13.506 [global] 00:09:13.506 thread=1 00:09:13.506 invalidate=1 00:09:13.506 rw=write 00:09:13.506 time_based=1 00:09:13.506 runtime=1 00:09:13.506 ioengine=libaio 00:09:13.506 direct=1 00:09:13.506 bs=4096 00:09:13.506 iodepth=1 00:09:13.506 norandommap=0 00:09:13.506 numjobs=1 00:09:13.506 00:09:13.506 verify_dump=1 00:09:13.506 verify_backlog=512 00:09:13.506 verify_state_save=0 00:09:13.506 do_verify=1 00:09:13.506 verify=crc32c-intel 00:09:13.506 [job0] 00:09:13.506 filename=/dev/nvme0n1 00:09:13.506 [job1] 00:09:13.506 filename=/dev/nvme0n2 00:09:13.506 [job2] 00:09:13.506 filename=/dev/nvme0n3 00:09:13.764 [job3] 00:09:13.764 filename=/dev/nvme0n4 00:09:13.764 Could not set queue depth (nvme0n1) 00:09:13.764 Could not set queue depth (nvme0n2) 00:09:13.764 Could not set queue depth (nvme0n3) 00:09:13.764 Could not set queue depth (nvme0n4) 00:09:13.764 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.764 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.764 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.764 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.764 fio-3.35 00:09:13.764 Starting 4 threads 00:09:15.139 00:09:15.139 job0: (groupid=0, jobs=1): err= 0: pid=66587: Mon Dec 9 10:52:08 2024 00:09:15.139 read: IOPS=2225, BW=8903KiB/s (9117kB/s)(8912KiB/1001msec) 00:09:15.139 slat (nsec): min=4356, max=32160, avg=5912.14, stdev=1840.81 00:09:15.139 clat (usec): min=179, max=298, avg=223.84, stdev=16.33 00:09:15.139 lat (usec): min=184, max=304, avg=229.75, stdev=16.93 00:09:15.139 clat percentiles (usec): 00:09:15.139 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:09:15.139 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:09:15.139 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 245], 95.00th=[ 253], 00:09:15.139 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 297], 99.95th=[ 297], 00:09:15.139 | 99.99th=[ 297] 
00:09:15.139 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:15.139 slat (usec): min=5, max=113, avg=11.16, stdev= 6.58 00:09:15.139 clat (usec): min=93, max=394, avg=177.92, stdev=16.34 00:09:15.139 lat (usec): min=132, max=507, avg=189.08, stdev=18.93 00:09:15.139 clat percentiles (usec): 00:09:15.139 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:09:15.139 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:09:15.139 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 206], 00:09:15.139 | 99.00th=[ 225], 99.50th=[ 233], 99.90th=[ 262], 99.95th=[ 265], 00:09:15.139 | 99.99th=[ 396] 00:09:15.139 bw ( KiB/s): min=10776, max=10776, per=21.07%, avg=10776.00, stdev= 0.00, samples=1 00:09:15.139 iops : min= 2694, max= 2694, avg=2694.00, stdev= 0.00, samples=1 00:09:15.139 lat (usec) : 100=0.06%, 250=97.06%, 500=2.88% 00:09:15.139 cpu : usr=0.80%, sys=3.70%, ctx=4789, majf=0, minf=19 00:09:15.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.139 issued rwts: total=2228,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.139 job1: (groupid=0, jobs=1): err= 0: pid=66588: Mon Dec 9 10:52:08 2024 00:09:15.139 read: IOPS=3789, BW=14.8MiB/s (15.5MB/s)(14.8MiB/1000msec) 00:09:15.139 slat (nsec): min=6060, max=26080, avg=6910.54, stdev=1178.81 00:09:15.139 clat (usec): min=111, max=5272, avg=138.65, stdev=102.63 00:09:15.139 lat (usec): min=118, max=5280, avg=145.57, stdev=102.84 00:09:15.139 clat percentiles (usec): 00:09:15.139 | 1.00th=[ 118], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 127], 00:09:15.139 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:09:15.139 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:09:15.139 | 99.00th=[ 172], 99.50th=[ 176], 99.90th=[ 523], 99.95th=[ 3261], 00:09:15.139 | 99.99th=[ 5276] 00:09:15.139 write: IOPS=4096, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1000msec); 0 zone resets 00:09:15.139 slat (usec): min=9, max=111, avg=10.51, stdev= 2.80 00:09:15.139 clat (usec): min=72, max=1635, avg=97.27, stdev=26.06 00:09:15.139 lat (usec): min=81, max=1645, avg=107.78, stdev=26.40 00:09:15.139 clat percentiles (usec): 00:09:15.139 | 1.00th=[ 81], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 90], 00:09:15.139 | 30.00th=[ 92], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 98], 00:09:15.139 | 70.00th=[ 100], 80.00th=[ 103], 90.00th=[ 110], 95.00th=[ 114], 00:09:15.139 | 99.00th=[ 125], 99.50th=[ 133], 99.90th=[ 157], 99.95th=[ 241], 00:09:15.139 | 99.99th=[ 1631] 00:09:15.139 bw ( KiB/s): min=16384, max=16384, per=32.03%, avg=16384.00, stdev= 0.00, samples=1 00:09:15.139 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:15.139 lat (usec) : 100=36.06%, 250=63.87%, 500=0.01%, 750=0.01% 00:09:15.139 lat (msec) : 2=0.03%, 4=0.01%, 10=0.01% 00:09:15.139 cpu : usr=1.20%, sys=5.80%, ctx=7887, majf=0, minf=9 00:09:15.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.139 issued rwts: total=3789,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.139 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:09:15.139 job2: (groupid=0, jobs=1): err= 0: pid=66590: Mon Dec 9 10:52:08 2024 00:09:15.139 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(13.9MiB/1001msec) 00:09:15.139 slat (nsec): min=6236, max=29677, avg=6952.30, stdev=1078.97 00:09:15.139 clat (usec): min=119, max=419, avg=149.16, stdev=12.34 00:09:15.139 lat (usec): min=126, max=426, avg=156.12, stdev=12.53 00:09:15.139 clat percentiles (usec): 00:09:15.139 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:09:15.139 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 00:09:15.139 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 172], 00:09:15.139 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 210], 99.95th=[ 219], 00:09:15.139 | 99.99th=[ 420] 00:09:15.139 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:15.139 slat (usec): min=9, max=114, avg=11.44, stdev= 5.53 00:09:15.139 clat (usec): min=85, max=263, avg=110.27, stdev=11.88 00:09:15.139 lat (usec): min=95, max=298, avg=121.71, stdev=14.26 00:09:15.139 clat percentiles (usec): 00:09:15.139 | 1.00th=[ 91], 5.00th=[ 95], 10.00th=[ 98], 20.00th=[ 101], 00:09:15.139 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 112], 00:09:15.139 | 70.00th=[ 115], 80.00th=[ 119], 90.00th=[ 125], 95.00th=[ 131], 00:09:15.139 | 99.00th=[ 149], 99.50th=[ 157], 99.90th=[ 190], 99.95th=[ 210], 00:09:15.139 | 99.99th=[ 265] 00:09:15.139 bw ( KiB/s): min=16384, max=16384, per=32.03%, avg=16384.00, stdev= 0.00, samples=1 00:09:15.139 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:15.139 lat (usec) : 100=8.11%, 250=91.86%, 500=0.03% 00:09:15.140 cpu : usr=1.20%, sys=5.50%, ctx=7154, majf=0, minf=11 00:09:15.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.140 issued rwts: total=3570,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.140 job3: (groupid=0, jobs=1): err= 0: pid=66593: Mon Dec 9 10:52:08 2024 00:09:15.140 read: IOPS=2228, BW=8915KiB/s (9129kB/s)(8924KiB/1001msec) 00:09:15.140 slat (nsec): min=4413, max=23283, avg=6760.54, stdev=1111.25 00:09:15.140 clat (usec): min=120, max=296, avg=222.68, stdev=16.94 00:09:15.140 lat (usec): min=134, max=308, avg=229.44, stdev=17.02 00:09:15.140 clat percentiles (usec): 00:09:15.140 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 210], 00:09:15.140 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:09:15.140 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 253], 00:09:15.140 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 293], 99.95th=[ 293], 00:09:15.140 | 99.99th=[ 297] 00:09:15.140 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:15.140 slat (nsec): min=6213, max=99551, avg=11965.21, stdev=5672.26 00:09:15.140 clat (usec): min=101, max=272, avg=177.04, stdev=15.89 00:09:15.140 lat (usec): min=150, max=304, avg=189.00, stdev=17.75 00:09:15.140 clat percentiles (usec): 00:09:15.140 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:09:15.140 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:09:15.140 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 204], 00:09:15.140 | 99.00th=[ 225], 99.50th=[ 235], 99.90th=[ 255], 99.95th=[ 258], 00:09:15.140 | 99.99th=[ 273] 
00:09:15.140 bw ( KiB/s): min=10776, max=10776, per=21.07%, avg=10776.00, stdev= 0.00, samples=1 00:09:15.140 iops : min= 2694, max= 2694, avg=2694.00, stdev= 0.00, samples=1 00:09:15.140 lat (usec) : 250=97.37%, 500=2.63% 00:09:15.140 cpu : usr=1.20%, sys=4.00%, ctx=4791, majf=0, minf=9 00:09:15.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:15.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.140 issued rwts: total=2231,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:15.140 00:09:15.140 Run status group 0 (all jobs): 00:09:15.140 READ: bw=46.1MiB/s (48.4MB/s), 8903KiB/s-14.8MiB/s (9117kB/s-15.5MB/s), io=46.2MiB (48.4MB), run=1000-1001msec 00:09:15.140 WRITE: bw=49.9MiB/s (52.4MB/s), 9.99MiB/s-16.0MiB/s (10.5MB/s-16.8MB/s), io=50.0MiB (52.4MB), run=1000-1001msec 00:09:15.140 00:09:15.140 Disk stats (read/write): 00:09:15.140 nvme0n1: ios=2098/2134, merge=0/0, ticks=484/358, in_queue=842, util=89.98% 00:09:15.140 nvme0n2: ios=3468/3584, merge=0/0, ticks=495/369, in_queue=864, util=90.12% 00:09:15.140 nvme0n3: ios=3109/3243, merge=0/0, ticks=498/371, in_queue=869, util=90.43% 00:09:15.140 nvme0n4: ios=2075/2135, merge=0/0, ticks=495/377, in_queue=872, util=90.50% 00:09:15.140 10:52:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:15.140 [global] 00:09:15.140 thread=1 00:09:15.140 invalidate=1 00:09:15.140 rw=randwrite 00:09:15.140 time_based=1 00:09:15.140 runtime=1 00:09:15.140 ioengine=libaio 00:09:15.140 direct=1 00:09:15.140 bs=4096 00:09:15.140 iodepth=1 00:09:15.140 norandommap=0 00:09:15.140 numjobs=1 00:09:15.140 00:09:15.140 verify_dump=1 00:09:15.140 verify_backlog=512 00:09:15.140 verify_state_save=0 00:09:15.140 do_verify=1 00:09:15.140 verify=crc32c-intel 00:09:15.140 [job0] 00:09:15.140 filename=/dev/nvme0n1 00:09:15.140 [job1] 00:09:15.140 filename=/dev/nvme0n2 00:09:15.140 [job2] 00:09:15.140 filename=/dev/nvme0n3 00:09:15.140 [job3] 00:09:15.140 filename=/dev/nvme0n4 00:09:15.140 Could not set queue depth (nvme0n1) 00:09:15.140 Could not set queue depth (nvme0n2) 00:09:15.140 Could not set queue depth (nvme0n3) 00:09:15.140 Could not set queue depth (nvme0n4) 00:09:15.140 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.140 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.140 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.140 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:15.140 fio-3.35 00:09:15.140 Starting 4 threads 00:09:16.514 00:09:16.514 job0: (groupid=0, jobs=1): err= 0: pid=66647: Mon Dec 9 10:52:09 2024 00:09:16.514 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:16.514 slat (nsec): min=5982, max=32148, avg=6947.02, stdev=1507.36 00:09:16.514 clat (usec): min=133, max=721, avg=243.09, stdev=25.22 00:09:16.514 lat (usec): min=139, max=728, avg=250.04, stdev=25.50 00:09:16.514 clat percentiles (usec): 00:09:16.514 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 227], 00:09:16.514 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 241], 
60.00th=[ 245], 00:09:16.514 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 281], 00:09:16.514 | 99.00th=[ 306], 99.50th=[ 330], 99.90th=[ 449], 99.95th=[ 594], 00:09:16.514 | 99.99th=[ 725] 00:09:16.514 write: IOPS=2421, BW=9686KiB/s (9919kB/s)(9696KiB/1001msec); 0 zone resets 00:09:16.514 slat (usec): min=9, max=147, avg=12.02, stdev= 6.40 00:09:16.514 clat (usec): min=88, max=308, avg=187.52, stdev=17.89 00:09:16.514 lat (usec): min=99, max=455, avg=199.54, stdev=20.06 00:09:16.514 clat percentiles (usec): 00:09:16.514 | 1.00th=[ 153], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 176], 00:09:16.514 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 190], 00:09:16.514 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 210], 95.00th=[ 219], 00:09:16.514 | 99.00th=[ 235], 99.50th=[ 243], 99.90th=[ 273], 99.95th=[ 302], 00:09:16.514 | 99.99th=[ 310] 00:09:16.514 bw ( KiB/s): min= 9336, max= 9336, per=18.81%, avg=9336.00, stdev= 0.00, samples=1 00:09:16.514 iops : min= 2334, max= 2334, avg=2334.00, stdev= 0.00, samples=1 00:09:16.514 lat (usec) : 100=0.13%, 250=85.62%, 500=14.20%, 750=0.04% 00:09:16.514 cpu : usr=0.80%, sys=3.60%, ctx=4473, majf=0, minf=15 00:09:16.514 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:16.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.514 issued rwts: total=2048,2424,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.514 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:16.514 job1: (groupid=0, jobs=1): err= 0: pid=66648: Mon Dec 9 10:52:09 2024 00:09:16.514 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:09:16.514 slat (nsec): min=6186, max=35858, avg=7199.21, stdev=1591.13 00:09:16.514 clat (usec): min=109, max=4806, avg=142.85, stdev=138.23 00:09:16.514 lat (usec): min=116, max=4812, avg=150.05, stdev=138.48 00:09:16.514 clat percentiles (usec): 00:09:16.514 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 125], 20.00th=[ 128], 00:09:16.514 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:09:16.514 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 159], 00:09:16.514 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 3785], 99.95th=[ 3982], 00:09:16.514 | 99.99th=[ 4817] 00:09:16.514 write: IOPS=3992, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1001msec); 0 zone resets 00:09:16.514 slat (usec): min=9, max=155, avg=11.50, stdev= 5.57 00:09:16.514 clat (usec): min=73, max=2577, avg=102.42, stdev=66.94 00:09:16.514 lat (usec): min=83, max=2608, avg=113.92, stdev=67.91 00:09:16.514 clat percentiles (usec): 00:09:16.514 | 1.00th=[ 81], 5.00th=[ 85], 10.00th=[ 88], 20.00th=[ 91], 00:09:16.514 | 30.00th=[ 94], 40.00th=[ 96], 50.00th=[ 99], 60.00th=[ 101], 00:09:16.515 | 70.00th=[ 104], 80.00th=[ 109], 90.00th=[ 115], 95.00th=[ 121], 00:09:16.515 | 99.00th=[ 143], 99.50th=[ 157], 99.90th=[ 848], 99.95th=[ 2376], 00:09:16.515 | 99.99th=[ 2573] 00:09:16.515 bw ( KiB/s): min=16384, max=16384, per=33.00%, avg=16384.00, stdev= 0.00, samples=1 00:09:16.515 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:09:16.515 lat (usec) : 100=29.64%, 250=70.12%, 500=0.07%, 750=0.04%, 1000=0.01% 00:09:16.515 lat (msec) : 2=0.03%, 4=0.08%, 10=0.01% 00:09:16.515 cpu : usr=1.60%, sys=5.70%, ctx=7580, majf=0, minf=13 00:09:16.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:16.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:09:16.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.515 issued rwts: total=3584,3996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.515 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:16.515 job2: (groupid=0, jobs=1): err= 0: pid=66649: Mon Dec 9 10:52:09 2024 00:09:16.515 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:09:16.515 slat (nsec): min=6020, max=29113, avg=7933.10, stdev=2457.38 00:09:16.515 clat (usec): min=137, max=477, avg=241.41, stdev=20.99 00:09:16.515 lat (usec): min=149, max=488, avg=249.35, stdev=21.31 00:09:16.515 clat percentiles (usec): 00:09:16.515 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 227], 00:09:16.515 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:09:16.515 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 277], 00:09:16.515 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 371], 99.95th=[ 408], 00:09:16.515 | 99.99th=[ 478] 00:09:16.515 write: IOPS=2417, BW=9670KiB/s (9902kB/s)(9680KiB/1001msec); 0 zone resets 00:09:16.515 slat (usec): min=9, max=140, avg=14.22, stdev= 9.30 00:09:16.515 clat (usec): min=94, max=321, avg=185.94, stdev=19.09 00:09:16.515 lat (usec): min=111, max=462, avg=200.16, stdev=20.76 00:09:16.515 clat percentiles (usec): 00:09:16.515 | 1.00th=[ 145], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 172], 00:09:16.515 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:09:16.515 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 219], 00:09:16.515 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 310], 99.95th=[ 318], 00:09:16.515 | 99.99th=[ 322] 00:09:16.515 bw ( KiB/s): min= 9296, max= 9296, per=18.72%, avg=9296.00, stdev= 0.00, samples=1 00:09:16.515 iops : min= 2324, max= 2324, avg=2324.00, stdev= 0.00, samples=1 00:09:16.515 lat (usec) : 100=0.02%, 250=86.15%, 500=13.83% 00:09:16.515 cpu : usr=0.80%, sys=4.40%, ctx=4468, majf=0, minf=11 00:09:16.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:16.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.515 issued rwts: total=2048,2420,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.515 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:16.515 job3: (groupid=0, jobs=1): err= 0: pid=66650: Mon Dec 9 10:52:09 2024 00:09:16.515 read: IOPS=3421, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1001msec) 00:09:16.515 slat (nsec): min=6248, max=51482, avg=7199.32, stdev=1739.06 00:09:16.515 clat (usec): min=123, max=273, avg=152.79, stdev=14.82 00:09:16.515 lat (usec): min=129, max=282, avg=159.99, stdev=15.25 00:09:16.515 clat percentiles (usec): 00:09:16.515 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 141], 00:09:16.515 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 155], 00:09:16.515 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 178], 00:09:16.515 | 99.00th=[ 204], 99.50th=[ 219], 99.90th=[ 265], 99.95th=[ 269], 00:09:16.515 | 99.99th=[ 273] 00:09:16.515 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:09:16.515 slat (usec): min=9, max=114, avg=11.53, stdev= 5.50 00:09:16.515 clat (usec): min=74, max=1840, avg=112.77, stdev=37.68 00:09:16.515 lat (usec): min=85, max=1850, avg=124.31, stdev=38.60 00:09:16.515 clat percentiles (usec): 00:09:16.515 | 1.00th=[ 90], 5.00th=[ 95], 10.00th=[ 98], 20.00th=[ 102], 00:09:16.515 | 30.00th=[ 105], 40.00th=[ 
108], 50.00th=[ 111], 60.00th=[ 114], 00:09:16.515 | 70.00th=[ 117], 80.00th=[ 121], 90.00th=[ 128], 95.00th=[ 135], 00:09:16.515 | 99.00th=[ 151], 99.50th=[ 159], 99.90th=[ 322], 99.95th=[ 1254], 00:09:16.515 | 99.99th=[ 1844] 00:09:16.515 bw ( KiB/s): min=15656, max=15656, per=31.54%, avg=15656.00, stdev= 0.00, samples=1 00:09:16.515 iops : min= 3914, max= 3914, avg=3914.00, stdev= 0.00, samples=1 00:09:16.515 lat (usec) : 100=7.72%, 250=92.12%, 500=0.11%, 750=0.01% 00:09:16.515 lat (msec) : 2=0.03% 00:09:16.515 cpu : usr=1.30%, sys=5.40%, ctx=7024, majf=0, minf=7 00:09:16.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:16.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.515 issued rwts: total=3425,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.515 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:16.515 00:09:16.515 Run status group 0 (all jobs): 00:09:16.515 READ: bw=43.3MiB/s (45.4MB/s), 8184KiB/s-14.0MiB/s (8380kB/s-14.7MB/s), io=43.4MiB (45.5MB), run=1001-1001msec 00:09:16.515 WRITE: bw=48.5MiB/s (50.8MB/s), 9670KiB/s-15.6MiB/s (9902kB/s-16.4MB/s), io=48.5MiB (50.9MB), run=1001-1001msec 00:09:16.515 00:09:16.515 Disk stats (read/write): 00:09:16.515 nvme0n1: ios=1912/2048, merge=0/0, ticks=494/390, in_queue=884, util=89.98% 00:09:16.515 nvme0n2: ios=3123/3584, merge=0/0, ticks=465/371, in_queue=836, util=89.03% 00:09:16.515 nvme0n3: ios=1900/2048, merge=0/0, ticks=475/397, in_queue=872, util=90.45% 00:09:16.515 nvme0n4: ios=3098/3072, merge=0/0, ticks=501/358, in_queue=859, util=90.42% 00:09:16.515 10:52:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:16.515 [global] 00:09:16.515 thread=1 00:09:16.515 invalidate=1 00:09:16.515 rw=write 00:09:16.515 time_based=1 00:09:16.515 runtime=1 00:09:16.515 ioengine=libaio 00:09:16.515 direct=1 00:09:16.515 bs=4096 00:09:16.515 iodepth=128 00:09:16.515 norandommap=0 00:09:16.515 numjobs=1 00:09:16.515 00:09:16.515 verify_dump=1 00:09:16.515 verify_backlog=512 00:09:16.515 verify_state_save=0 00:09:16.515 do_verify=1 00:09:16.515 verify=crc32c-intel 00:09:16.515 [job0] 00:09:16.515 filename=/dev/nvme0n1 00:09:16.515 [job1] 00:09:16.515 filename=/dev/nvme0n2 00:09:16.515 [job2] 00:09:16.515 filename=/dev/nvme0n3 00:09:16.515 [job3] 00:09:16.515 filename=/dev/nvme0n4 00:09:16.515 Could not set queue depth (nvme0n1) 00:09:16.515 Could not set queue depth (nvme0n2) 00:09:16.515 Could not set queue depth (nvme0n3) 00:09:16.515 Could not set queue depth (nvme0n4) 00:09:16.773 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:16.773 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:16.773 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:16.773 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:16.773 fio-3.35 00:09:16.773 Starting 4 threads 00:09:17.708 00:09:17.708 job0: (groupid=0, jobs=1): err= 0: pid=66709: Mon Dec 9 10:52:10 2024 00:09:17.708 read: IOPS=6137, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1001msec) 00:09:17.708 slat (usec): min=15, max=2033, avg=68.05, stdev=187.71 00:09:17.708 clat (usec): min=7881, max=18738, 
avg=9443.42, stdev=1216.59 00:09:17.708 lat (usec): min=8052, max=20251, avg=9511.47, stdev=1215.57 00:09:17.708 clat percentiles (usec): 00:09:17.708 | 1.00th=[ 8160], 5.00th=[ 8586], 10.00th=[ 8717], 20.00th=[ 8848], 00:09:17.708 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:09:17.708 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10159], 00:09:17.708 | 99.00th=[17433], 99.50th=[17957], 99.90th=[18744], 99.95th=[18744], 00:09:17.708 | 99.99th=[18744] 00:09:17.708 write: IOPS=6628, BW=25.9MiB/s (27.1MB/s)(25.9MiB/1001msec); 0 zone resets 00:09:17.708 slat (usec): min=12, max=13281, avg=77.53, stdev=343.66 00:09:17.708 clat (usec): min=342, max=42566, avg=10046.76, stdev=5288.45 00:09:17.708 lat (usec): min=375, max=42596, avg=10124.29, stdev=5323.02 00:09:17.708 clat percentiles (usec): 00:09:17.708 | 1.00th=[ 4817], 5.00th=[ 7963], 10.00th=[ 8160], 20.00th=[ 8356], 00:09:17.708 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8848], 00:09:17.708 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[13304], 95.00th=[17433], 00:09:17.708 | 99.00th=[38536], 99.50th=[40109], 99.90th=[42730], 99.95th=[42730], 00:09:17.708 | 99.99th=[42730] 00:09:17.708 bw ( KiB/s): min=23592, max=23592, per=41.54%, avg=23592.00, stdev= 0.00, samples=1 00:09:17.708 iops : min= 5898, max= 5898, avg=5898.00, stdev= 0.00, samples=1 00:09:17.708 lat (usec) : 500=0.03% 00:09:17.708 lat (msec) : 2=0.09%, 4=0.20%, 10=89.50%, 20=8.00%, 50=2.18% 00:09:17.708 cpu : usr=5.60%, sys=28.90%, ctx=672, majf=0, minf=17 00:09:17.708 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:17.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:17.708 issued rwts: total=6144,6635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.708 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:17.708 job1: (groupid=0, jobs=1): err= 0: pid=66710: Mon Dec 9 10:52:10 2024 00:09:17.708 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:09:17.708 slat (usec): min=7, max=9273, avg=246.96, stdev=879.19 00:09:17.708 clat (usec): min=14330, max=49455, avg=32881.66, stdev=7883.60 00:09:17.708 lat (usec): min=15487, max=49919, avg=33128.63, stdev=7907.64 00:09:17.708 clat percentiles (usec): 00:09:17.708 | 1.00th=[16712], 5.00th=[18482], 10.00th=[21103], 20.00th=[25822], 00:09:17.708 | 30.00th=[29754], 40.00th=[31589], 50.00th=[33162], 60.00th=[34866], 00:09:17.708 | 70.00th=[38011], 80.00th=[40109], 90.00th=[42730], 95.00th=[45351], 00:09:17.708 | 99.00th=[47449], 99.50th=[48497], 99.90th=[49546], 99.95th=[49546], 00:09:17.708 | 99.99th=[49546] 00:09:17.708 write: IOPS=2324, BW=9298KiB/s (9521kB/s)(9344KiB/1005msec); 0 zone resets 00:09:17.708 slat (usec): min=9, max=7344, avg=201.95, stdev=705.33 00:09:17.708 clat (usec): min=3552, max=58372, avg=25554.27, stdev=11451.86 00:09:17.708 lat (usec): min=5830, max=58422, avg=25756.22, stdev=11527.42 00:09:17.708 clat percentiles (usec): 00:09:17.708 | 1.00th=[ 7635], 5.00th=[ 8586], 10.00th=[ 8717], 20.00th=[11731], 00:09:17.708 | 30.00th=[21890], 40.00th=[23725], 50.00th=[25297], 60.00th=[27395], 00:09:17.708 | 70.00th=[30278], 80.00th=[36439], 90.00th=[40633], 95.00th=[45351], 00:09:17.708 | 99.00th=[52167], 99.50th=[54264], 99.90th=[54789], 99.95th=[58459], 00:09:17.708 | 99.99th=[58459] 00:09:17.708 bw ( KiB/s): min= 6746, max=10912, per=15.55%, avg=8829.00, stdev=2945.81, samples=2 00:09:17.708 
iops : min= 1686, max= 2728, avg=2207.00, stdev=736.81, samples=2 00:09:17.708 lat (msec) : 4=0.02%, 10=10.08%, 20=7.12%, 50=81.57%, 100=1.21% 00:09:17.708 cpu : usr=3.29%, sys=8.67%, ctx=680, majf=0, minf=12 00:09:17.708 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:09:17.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:17.708 issued rwts: total=2048,2336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.708 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:17.708 job2: (groupid=0, jobs=1): err= 0: pid=66711: Mon Dec 9 10:52:10 2024 00:09:17.708 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:09:17.708 slat (usec): min=3, max=8204, avg=153.13, stdev=666.09 00:09:17.708 clat (usec): min=13083, max=33705, avg=19677.07, stdev=3372.48 00:09:17.708 lat (usec): min=13097, max=33730, avg=19830.20, stdev=3415.06 00:09:17.708 clat percentiles (usec): 00:09:17.708 | 1.00th=[13173], 5.00th=[15270], 10.00th=[15664], 20.00th=[16450], 00:09:17.708 | 30.00th=[17695], 40.00th=[18482], 50.00th=[19530], 60.00th=[20317], 00:09:17.708 | 70.00th=[21103], 80.00th=[22676], 90.00th=[24511], 95.00th=[25560], 00:09:17.708 | 99.00th=[29230], 99.50th=[29492], 99.90th=[30016], 99.95th=[31851], 00:09:17.708 | 99.99th=[33817] 00:09:17.708 write: IOPS=3233, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1005msec); 0 zone resets 00:09:17.708 slat (usec): min=6, max=8101, avg=154.45, stdev=629.76 00:09:17.708 clat (usec): min=4367, max=41048, avg=20528.25, stdev=7758.08 00:09:17.708 lat (usec): min=6513, max=41142, avg=20682.70, stdev=7813.79 00:09:17.708 clat percentiles (usec): 00:09:17.709 | 1.00th=[ 9896], 5.00th=[11207], 10.00th=[13042], 20.00th=[14615], 00:09:17.709 | 30.00th=[15008], 40.00th=[15926], 50.00th=[16909], 60.00th=[18744], 00:09:17.709 | 70.00th=[26346], 80.00th=[28705], 90.00th=[32375], 95.00th=[34866], 00:09:17.709 | 99.00th=[38011], 99.50th=[38536], 99.90th=[41157], 99.95th=[41157], 00:09:17.709 | 99.99th=[41157] 00:09:17.709 bw ( KiB/s): min=11249, max=13739, per=22.00%, avg=12494.00, stdev=1760.70, samples=2 00:09:17.709 iops : min= 2812, max= 3434, avg=3123.00, stdev=439.82, samples=2 00:09:17.709 lat (msec) : 10=0.70%, 20=59.11%, 50=40.19% 00:09:17.709 cpu : usr=2.79%, sys=10.96%, ctx=352, majf=0, minf=5 00:09:17.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:17.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:17.709 issued rwts: total=3072,3250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:17.709 job3: (groupid=0, jobs=1): err= 0: pid=66712: Mon Dec 9 10:52:10 2024 00:09:17.709 read: IOPS=1883, BW=7534KiB/s (7715kB/s)(7572KiB/1005msec) 00:09:17.709 slat (usec): min=6, max=13430, avg=313.17, stdev=1098.01 00:09:17.709 clat (usec): min=2490, max=68289, avg=38545.78, stdev=10606.55 00:09:17.709 lat (usec): min=7555, max=70347, avg=38858.95, stdev=10631.35 00:09:17.709 clat percentiles (usec): 00:09:17.709 | 1.00th=[ 9110], 5.00th=[24773], 10.00th=[26870], 20.00th=[30016], 00:09:17.709 | 30.00th=[31851], 40.00th=[35390], 50.00th=[39060], 60.00th=[40633], 00:09:17.709 | 70.00th=[43779], 80.00th=[47973], 90.00th=[50594], 95.00th=[57934], 00:09:17.709 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 
00:09:17.709 | 99.99th=[68682] 00:09:17.709 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:09:17.709 slat (usec): min=13, max=12935, avg=189.53, stdev=766.34 00:09:17.709 clat (usec): min=11662, max=46443, avg=25795.44, stdev=7464.28 00:09:17.709 lat (usec): min=11696, max=46473, avg=25984.97, stdev=7496.70 00:09:17.709 clat percentiles (usec): 00:09:17.709 | 1.00th=[13042], 5.00th=[13960], 10.00th=[15926], 20.00th=[19006], 00:09:17.709 | 30.00th=[22152], 40.00th=[22938], 50.00th=[24511], 60.00th=[26870], 00:09:17.709 | 70.00th=[28181], 80.00th=[32375], 90.00th=[38011], 95.00th=[39584], 00:09:17.709 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44827], 99.95th=[46400], 00:09:17.709 | 99.99th=[46400] 00:09:17.709 bw ( KiB/s): min= 8175, max= 8208, per=14.42%, avg=8191.50, stdev=23.33, samples=2 00:09:17.709 iops : min= 2043, max= 2052, avg=2047.50, stdev= 6.36, samples=2 00:09:17.709 lat (msec) : 4=0.03%, 10=0.81%, 20=12.66%, 50=80.99%, 100=5.51% 00:09:17.709 cpu : usr=1.79%, sys=8.96%, ctx=651, majf=0, minf=19 00:09:17.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:09:17.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:17.709 issued rwts: total=1893,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:17.709 00:09:17.709 Run status group 0 (all jobs): 00:09:17.709 READ: bw=51.1MiB/s (53.6MB/s), 7534KiB/s-24.0MiB/s (7715kB/s-25.1MB/s), io=51.4MiB (53.9MB), run=1001-1005msec 00:09:17.709 WRITE: bw=55.5MiB/s (58.2MB/s), 8151KiB/s-25.9MiB/s (8347kB/s-27.1MB/s), io=55.7MiB (58.4MB), run=1001-1005msec 00:09:17.709 00:09:17.709 Disk stats (read/write): 00:09:17.709 nvme0n1: ios=5386/5632, merge=0/0, ticks=11038/12087, in_queue=23125, util=89.47% 00:09:17.709 nvme0n2: ios=1865/2048, merge=0/0, ticks=14149/11733, in_queue=25882, util=89.01% 00:09:17.709 nvme0n3: ios=2594/2975, merge=0/0, ticks=19868/19549, in_queue=39417, util=90.00% 00:09:17.709 nvme0n4: ios=1553/1926, merge=0/0, ticks=15181/11549, in_queue=26730, util=89.24% 00:09:17.967 10:52:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:17.967 [global] 00:09:17.967 thread=1 00:09:17.967 invalidate=1 00:09:17.967 rw=randwrite 00:09:17.967 time_based=1 00:09:17.967 runtime=1 00:09:17.967 ioengine=libaio 00:09:17.967 direct=1 00:09:17.967 bs=4096 00:09:17.967 iodepth=128 00:09:17.967 norandommap=0 00:09:17.967 numjobs=1 00:09:17.967 00:09:17.967 verify_dump=1 00:09:17.967 verify_backlog=512 00:09:17.967 verify_state_save=0 00:09:17.967 do_verify=1 00:09:17.967 verify=crc32c-intel 00:09:17.967 [job0] 00:09:17.967 filename=/dev/nvme0n1 00:09:17.967 [job1] 00:09:17.967 filename=/dev/nvme0n2 00:09:17.967 [job2] 00:09:17.967 filename=/dev/nvme0n3 00:09:17.967 [job3] 00:09:17.967 filename=/dev/nvme0n4 00:09:17.967 Could not set queue depth (nvme0n1) 00:09:17.967 Could not set queue depth (nvme0n2) 00:09:17.967 Could not set queue depth (nvme0n3) 00:09:17.967 Could not set queue depth (nvme0n4) 00:09:17.967 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:17.967 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:17.967 job2: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:17.967 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:17.967 fio-3.35 00:09:17.967 Starting 4 threads 00:09:19.342 00:09:19.342 job0: (groupid=0, jobs=1): err= 0: pid=66767: Mon Dec 9 10:52:12 2024 00:09:19.342 read: IOPS=2546, BW=9.95MiB/s (10.4MB/s)(9.98MiB/1003msec) 00:09:19.342 slat (usec): min=14, max=20232, avg=190.39, stdev=1130.78 00:09:19.342 clat (usec): min=2638, max=61616, avg=23718.19, stdev=8148.86 00:09:19.342 lat (usec): min=2656, max=61651, avg=23908.58, stdev=8235.54 00:09:19.342 clat percentiles (usec): 00:09:19.342 | 1.00th=[ 4752], 5.00th=[16581], 10.00th=[19006], 20.00th=[20579], 00:09:19.342 | 30.00th=[20841], 40.00th=[21365], 50.00th=[21627], 60.00th=[22152], 00:09:19.342 | 70.00th=[22676], 80.00th=[23725], 90.00th=[35390], 95.00th=[42206], 00:09:19.342 | 99.00th=[52691], 99.50th=[56361], 99.90th=[61604], 99.95th=[61604], 00:09:19.342 | 99.99th=[61604] 00:09:19.342 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:09:19.342 slat (usec): min=21, max=13270, avg=190.52, stdev=981.50 00:09:19.342 clat (usec): min=10530, max=70750, avg=25908.46, stdev=13731.78 00:09:19.342 lat (usec): min=10560, max=70779, avg=26098.98, stdev=13832.52 00:09:19.342 clat percentiles (usec): 00:09:19.342 | 1.00th=[10552], 5.00th=[12125], 10.00th=[14091], 20.00th=[18744], 00:09:19.342 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20317], 60.00th=[21103], 00:09:19.342 | 70.00th=[25297], 80.00th=[33424], 90.00th=[52167], 95.00th=[58459], 00:09:19.342 | 99.00th=[66847], 99.50th=[69731], 99.90th=[70779], 99.95th=[70779], 00:09:19.342 | 99.99th=[70779] 00:09:19.343 bw ( KiB/s): min= 8192, max=12312, per=15.54%, avg=10252.00, stdev=2913.28, samples=2 00:09:19.343 iops : min= 2048, max= 3078, avg=2563.00, stdev=728.32, samples=2 00:09:19.343 lat (msec) : 4=0.41%, 10=0.41%, 20=25.15%, 50=67.58%, 100=6.45% 00:09:19.343 cpu : usr=3.09%, sys=9.58%, ctx=192, majf=0, minf=11 00:09:19.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:19.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.343 issued rwts: total=2554,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:19.343 job1: (groupid=0, jobs=1): err= 0: pid=66768: Mon Dec 9 10:52:12 2024 00:09:19.343 read: IOPS=3437, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1004msec) 00:09:19.343 slat (usec): min=3, max=17627, avg=145.27, stdev=1019.75 00:09:19.343 clat (usec): min=1728, max=42987, avg=20260.79, stdev=5353.56 00:09:19.343 lat (usec): min=10106, max=56197, avg=20406.06, stdev=5427.39 00:09:19.343 clat percentiles (usec): 00:09:19.343 | 1.00th=[10814], 5.00th=[14353], 10.00th=[14877], 20.00th=[15401], 00:09:19.343 | 30.00th=[16188], 40.00th=[20579], 50.00th=[21103], 60.00th=[21627], 00:09:19.343 | 70.00th=[21890], 80.00th=[22938], 90.00th=[23725], 95.00th=[25297], 00:09:19.343 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:19.343 | 99.99th=[42730] 00:09:19.343 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:09:19.343 slat (usec): min=7, max=16609, avg=130.73, stdev=822.62 00:09:19.343 clat (usec): min=5897, max=29587, avg=15969.47, stdev=4303.86 00:09:19.343 lat (usec): min=8701, max=29617, avg=16100.20, 
stdev=4269.83 00:09:19.343 clat percentiles (usec): 00:09:19.343 | 1.00th=[ 8979], 5.00th=[10683], 10.00th=[11076], 20.00th=[11600], 00:09:19.343 | 30.00th=[12125], 40.00th=[13566], 50.00th=[15664], 60.00th=[18482], 00:09:19.343 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20841], 95.00th=[21103], 00:09:19.343 | 99.00th=[28705], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:09:19.343 | 99.99th=[29492] 00:09:19.343 bw ( KiB/s): min=12312, max=16384, per=21.75%, avg=14348.00, stdev=2879.34, samples=2 00:09:19.343 iops : min= 3078, max= 4096, avg=3587.00, stdev=719.83, samples=2 00:09:19.343 lat (msec) : 2=0.01%, 10=0.64%, 20=59.83%, 50=39.52% 00:09:19.343 cpu : usr=3.49%, sys=11.96%, ctx=166, majf=0, minf=15 00:09:19.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:19.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.343 issued rwts: total=3451,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:19.343 job2: (groupid=0, jobs=1): err= 0: pid=66769: Mon Dec 9 10:52:12 2024 00:09:19.343 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:09:19.343 slat (usec): min=7, max=5910, avg=90.69, stdev=527.53 00:09:19.343 clat (usec): min=7836, max=20860, avg=12833.04, stdev=1388.02 00:09:19.343 lat (usec): min=7856, max=24411, avg=12923.73, stdev=1413.24 00:09:19.343 clat percentiles (usec): 00:09:19.343 | 1.00th=[ 8356], 5.00th=[11469], 10.00th=[11863], 20.00th=[12256], 00:09:19.343 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:09:19.343 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13829], 95.00th=[14091], 00:09:19.343 | 99.00th=[19530], 99.50th=[20055], 99.90th=[20841], 99.95th=[20841], 00:09:19.343 | 99.99th=[20841] 00:09:19.343 write: IOPS=5295, BW=20.7MiB/s (21.7MB/s)(20.7MiB/1003msec); 0 zone resets 00:09:19.343 slat (usec): min=21, max=6490, avg=91.45, stdev=454.63 00:09:19.343 clat (usec): min=495, max=15745, avg=11539.80, stdev=1231.17 00:09:19.343 lat (usec): min=3958, max=16050, avg=11631.25, stdev=1163.39 00:09:19.343 clat percentiles (usec): 00:09:19.343 | 1.00th=[ 5800], 5.00th=[10290], 10.00th=[10683], 20.00th=[10945], 00:09:19.343 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:09:19.343 | 70.00th=[11863], 80.00th=[12256], 90.00th=[12780], 95.00th=[13173], 00:09:19.343 | 99.00th=[14877], 99.50th=[15008], 99.90th=[15664], 99.95th=[15795], 00:09:19.343 | 99.99th=[15795] 00:09:19.343 bw ( KiB/s): min=20480, max=21048, per=31.48%, avg=20764.00, stdev=401.64, samples=2 00:09:19.343 iops : min= 5120, max= 5262, avg=5191.00, stdev=100.41, samples=2 00:09:19.343 lat (usec) : 500=0.01% 00:09:19.343 lat (msec) : 4=0.02%, 10=4.18%, 20=95.48%, 50=0.31% 00:09:19.343 cpu : usr=4.69%, sys=20.76%, ctx=233, majf=0, minf=7 00:09:19.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:19.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.343 issued rwts: total=5120,5311,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:19.343 job3: (groupid=0, jobs=1): err= 0: pid=66770: Mon Dec 9 10:52:12 2024 00:09:19.343 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:09:19.343 slat (usec): min=3, max=5908, 
avg=91.92, stdev=461.09 00:09:19.343 clat (usec): min=7493, max=20893, avg=12776.24, stdev=1257.70 00:09:19.343 lat (usec): min=7504, max=24134, avg=12868.16, stdev=1231.18 00:09:19.343 clat percentiles (usec): 00:09:19.343 | 1.00th=[ 8848], 5.00th=[11731], 10.00th=[12125], 20.00th=[12256], 00:09:19.343 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12649], 60.00th=[12780], 00:09:19.343 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[13829], 00:09:19.343 | 99.00th=[19006], 99.50th=[20579], 99.90th=[20841], 99.95th=[20841], 00:09:19.343 | 99.99th=[20841] 00:09:19.343 write: IOPS=5138, BW=20.1MiB/s (21.0MB/s)(20.3MiB/1009msec); 0 zone resets 00:09:19.343 slat (usec): min=2, max=10557, avg=94.18, stdev=473.50 00:09:19.343 clat (usec): min=209, max=20418, avg=12060.21, stdev=1581.23 00:09:19.343 lat (usec): min=6544, max=20449, avg=12154.39, stdev=1537.87 00:09:19.343 clat percentiles (usec): 00:09:19.343 | 1.00th=[ 6718], 5.00th=[10290], 10.00th=[10552], 20.00th=[11338], 00:09:19.343 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:09:19.343 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12911], 95.00th=[15008], 00:09:19.343 | 99.00th=[19268], 99.50th=[20055], 99.90th=[20317], 99.95th=[20317], 00:09:19.343 | 99.99th=[20317] 00:09:19.343 bw ( KiB/s): min=20480, max=20480, per=31.05%, avg=20480.00, stdev= 0.00, samples=2 00:09:19.343 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:09:19.343 lat (usec) : 250=0.01% 00:09:19.343 lat (msec) : 10=2.38%, 20=96.94%, 50=0.67% 00:09:19.343 cpu : usr=5.75%, sys=17.76%, ctx=348, majf=0, minf=13 00:09:19.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:19.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:19.343 issued rwts: total=5120,5185,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:19.343 00:09:19.343 Run status group 0 (all jobs): 00:09:19.343 READ: bw=62.9MiB/s (65.9MB/s), 9.95MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=63.5MiB (66.5MB), run=1003-1009msec 00:09:19.343 WRITE: bw=64.4MiB/s (67.5MB/s), 9.97MiB/s-20.7MiB/s (10.5MB/s-21.7MB/s), io=65.0MiB (68.2MB), run=1003-1009msec 00:09:19.343 00:09:19.343 Disk stats (read/write): 00:09:19.343 nvme0n1: ios=2098/2211, merge=0/0, ticks=23459/27237, in_queue=50696, util=88.16% 00:09:19.343 nvme0n2: ios=2987/3072, merge=0/0, ticks=54846/46306, in_queue=101152, util=88.61% 00:09:19.343 nvme0n3: ios=4313/4608, merge=0/0, ticks=51453/48071, in_queue=99524, util=89.41% 00:09:19.343 nvme0n4: ios=4256/4608, merge=0/0, ticks=30525/31086, in_queue=61611, util=89.33% 00:09:19.343 10:52:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:19.343 10:52:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66787 00:09:19.343 10:52:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:19.343 10:52:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:19.343 [global] 00:09:19.343 thread=1 00:09:19.343 invalidate=1 00:09:19.343 rw=read 00:09:19.343 time_based=1 00:09:19.343 runtime=10 00:09:19.343 ioengine=libaio 00:09:19.343 direct=1 00:09:19.343 bs=4096 00:09:19.343 iodepth=1 00:09:19.343 norandommap=1 00:09:19.343 numjobs=1 00:09:19.343 00:09:19.343 [job0] 00:09:19.343 
filename=/dev/nvme0n1 00:09:19.343 [job1] 00:09:19.343 filename=/dev/nvme0n2 00:09:19.343 [job2] 00:09:19.343 filename=/dev/nvme0n3 00:09:19.343 [job3] 00:09:19.343 filename=/dev/nvme0n4 00:09:19.343 Could not set queue depth (nvme0n1) 00:09:19.343 Could not set queue depth (nvme0n2) 00:09:19.343 Could not set queue depth (nvme0n3) 00:09:19.343 Could not set queue depth (nvme0n4) 00:09:19.601 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:19.601 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:19.601 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:19.601 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:19.601 fio-3.35 00:09:19.601 Starting 4 threads 00:09:22.907 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:22.907 fio: pid=66831, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:22.907 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=34373632, buflen=4096 00:09:22.907 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:22.907 fio: pid=66830, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:22.908 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=43999232, buflen=4096 00:09:22.908 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:22.908 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:22.908 fio: pid=66828, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:22.908 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=50561024, buflen=4096 00:09:22.908 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:22.908 10:52:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:23.165 fio: pid=66829, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:23.165 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=53088256, buflen=4096 00:09:23.165 00:09:23.165 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66828: Mon Dec 9 10:52:16 2024 00:09:23.165 read: IOPS=3806, BW=14.9MiB/s (15.6MB/s)(48.2MiB/3243msec) 00:09:23.165 slat (usec): min=4, max=10329, avg=10.59, stdev=165.84 00:09:23.165 clat (usec): min=86, max=7926, avg=251.50, stdev=90.98 00:09:23.165 lat (usec): min=92, max=10487, avg=262.09, stdev=190.60 00:09:23.165 clat percentiles (usec): 00:09:23.165 | 1.00th=[ 122], 5.00th=[ 151], 10.00th=[ 169], 20.00th=[ 204], 00:09:23.165 | 30.00th=[ 223], 40.00th=[ 243], 50.00th=[ 260], 60.00th=[ 269], 00:09:23.165 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 334], 00:09:23.165 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[ 545], 99.95th=[ 840], 00:09:23.165 | 99.99th=[ 1713] 00:09:23.165 bw ( KiB/s): 
min=13120, max=16902, per=29.13%, avg=14853.83, stdev=1586.84, samples=6 00:09:23.165 iops : min= 3280, max= 4225, avg=3713.33, stdev=396.54, samples=6 00:09:23.165 lat (usec) : 100=0.40%, 250=43.33%, 500=56.11%, 750=0.09%, 1000=0.04% 00:09:23.165 lat (msec) : 2=0.02%, 10=0.01% 00:09:23.165 cpu : usr=0.40%, sys=2.84%, ctx=12356, majf=0, minf=1 00:09:23.165 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.165 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.165 issued rwts: total=12345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.165 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.165 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66829: Mon Dec 9 10:52:16 2024 00:09:23.165 read: IOPS=3718, BW=14.5MiB/s (15.2MB/s)(50.6MiB/3486msec) 00:09:23.165 slat (usec): min=4, max=8520, avg=13.86, stdev=152.13 00:09:23.165 clat (usec): min=82, max=20760, avg=254.11, stdev=240.73 00:09:23.165 lat (usec): min=88, max=20777, avg=267.98, stdev=285.73 00:09:23.165 clat percentiles (usec): 00:09:23.165 | 1.00th=[ 91], 5.00th=[ 99], 10.00th=[ 109], 20.00th=[ 155], 00:09:23.165 | 30.00th=[ 227], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 281], 00:09:23.165 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 347], 95.00th=[ 375], 00:09:23.165 | 99.00th=[ 429], 99.50th=[ 453], 99.90th=[ 1762], 99.95th=[ 3982], 00:09:23.165 | 99.99th=[ 7963] 00:09:23.165 bw ( KiB/s): min=11032, max=16140, per=25.84%, avg=13174.00, stdev=1821.22, samples=6 00:09:23.165 iops : min= 2758, max= 4035, avg=3293.50, stdev=455.30, samples=6 00:09:23.165 lat (usec) : 100=5.50%, 250=29.42%, 500=64.79%, 750=0.13%, 1000=0.04% 00:09:23.165 lat (msec) : 2=0.03%, 4=0.05%, 10=0.02%, 50=0.01% 00:09:23.166 cpu : usr=0.69%, sys=3.99%, ctx=12974, majf=0, minf=2 00:09:23.166 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.166 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.166 issued rwts: total=12962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.166 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.166 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66830: Mon Dec 9 10:52:16 2024 00:09:23.166 read: IOPS=3522, BW=13.8MiB/s (14.4MB/s)(42.0MiB/3050msec) 00:09:23.166 slat (usec): min=5, max=6853, avg=19.21, stdev=91.88 00:09:23.166 clat (usec): min=109, max=3627, avg=263.12, stdev=85.61 00:09:23.166 lat (usec): min=119, max=7015, avg=282.33, stdev=127.87 00:09:23.166 clat percentiles (usec): 00:09:23.166 | 1.00th=[ 127], 5.00th=[ 139], 10.00th=[ 155], 20.00th=[ 208], 00:09:23.166 | 30.00th=[ 229], 40.00th=[ 245], 50.00th=[ 269], 60.00th=[ 285], 00:09:23.166 | 70.00th=[ 306], 80.00th=[ 322], 90.00th=[ 347], 95.00th=[ 367], 00:09:23.166 | 99.00th=[ 400], 99.50th=[ 416], 99.90th=[ 537], 99.95th=[ 1106], 00:09:23.166 | 99.99th=[ 3589] 00:09:23.166 bw ( KiB/s): min=11952, max=16942, per=26.67%, avg=13600.00, stdev=2200.60, samples=5 00:09:23.166 iops : min= 2988, max= 4235, avg=3399.80, stdev=549.89, samples=5 00:09:23.166 lat (usec) : 250=41.92%, 500=57.96%, 750=0.03%, 1000=0.02% 00:09:23.166 lat (msec) : 2=0.03%, 4=0.03% 00:09:23.166 cpu : usr=1.25%, sys=5.58%, ctx=10758, majf=0, minf=1 00:09:23.166 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.166 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.166 issued rwts: total=10743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.166 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.166 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66831: Mon Dec 9 10:52:16 2024 00:09:23.166 read: IOPS=2944, BW=11.5MiB/s (12.1MB/s)(32.8MiB/2850msec) 00:09:23.166 slat (usec): min=13, max=150, avg=26.35, stdev= 6.83 00:09:23.166 clat (usec): min=143, max=5037, avg=310.58, stdev=80.48 00:09:23.166 lat (usec): min=168, max=5063, avg=336.93, stdev=81.32 00:09:23.166 clat percentiles (usec): 00:09:23.166 | 1.00th=[ 196], 5.00th=[ 231], 10.00th=[ 247], 20.00th=[ 265], 00:09:23.166 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 322], 00:09:23.166 | 70.00th=[ 334], 80.00th=[ 355], 90.00th=[ 375], 95.00th=[ 396], 00:09:23.166 | 99.00th=[ 445], 99.50th=[ 469], 99.90th=[ 676], 99.95th=[ 1434], 00:09:23.166 | 99.99th=[ 5014] 00:09:23.166 bw ( KiB/s): min=10896, max=12360, per=23.21%, avg=11835.20, stdev=567.37, samples=5 00:09:23.166 iops : min= 2724, max= 3090, avg=2958.80, stdev=141.84, samples=5 00:09:23.166 lat (usec) : 250=11.56%, 500=88.12%, 750=0.21%, 1000=0.02% 00:09:23.166 lat (msec) : 2=0.05%, 4=0.01%, 10=0.01% 00:09:23.166 cpu : usr=1.47%, sys=6.91%, ctx=8399, majf=0, minf=2 00:09:23.166 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.166 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.166 issued rwts: total=8393,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.166 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.166 00:09:23.166 Run status group 0 (all jobs): 00:09:23.166 READ: bw=49.8MiB/s (52.2MB/s), 11.5MiB/s-14.9MiB/s (12.1MB/s-15.6MB/s), io=174MiB (182MB), run=2850-3486msec 00:09:23.166 00:09:23.166 Disk stats (read/write): 00:09:23.166 nvme0n1: ios=11663/0, merge=0/0, ticks=2946/0, in_queue=2946, util=95.56% 00:09:23.166 nvme0n2: ios=12063/0, merge=0/0, ticks=3158/0, in_queue=3158, util=95.59% 00:09:23.166 nvme0n3: ios=9743/0, merge=0/0, ticks=2726/0, in_queue=2726, util=96.75% 00:09:23.166 nvme0n4: ios=7788/0, merge=0/0, ticks=2474/0, in_queue=2474, util=96.52% 00:09:23.166 10:52:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:23.166 10:52:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:23.423 10:52:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:23.423 10:52:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:23.679 10:52:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:23.679 10:52:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:23.936 10:52:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:23.936 10:52:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:23.936 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:23.936 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:24.193 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:24.193 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66787 00:09:24.193 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:24.193 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.193 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:24.193 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:24.193 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:24.193 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.193 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:24.193 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.451 nvmf hotplug test: fio failed as expected 00:09:24.451 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:24.451 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:24.451 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:24.451 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.451 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:24.451 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:24.451 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:24.451 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:24.451 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:24.451 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.451 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:24.451 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.451 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:24.451 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.451 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.451 rmmod nvme_tcp 00:09:24.451 rmmod nvme_fabrics 00:09:24.451 rmmod nvme_keyring 00:09:24.710 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.710 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:24.710 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:24.710 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66408 ']' 00:09:24.710 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66408 00:09:24.710 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66408 ']' 00:09:24.710 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66408 00:09:24.710 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:24.710 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.710 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66408 00:09:24.710 killing process with pid 66408 00:09:24.710 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.710 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.710 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66408' 00:09:24.710 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66408 00:09:24.710 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66408 00:09:24.970 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:24.970 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:24.970 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:24.970 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:24.970 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:24.970 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:24.970 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:24.970 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.970 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:24.970 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:24.970 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:24.970 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:24.970 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 
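Recap of the hotplug phase traced above: fio-wrapper is launched against the attached namespaces with a 10-second read workload, the backing raid and malloc bdevs are deleted over RPC while the jobs are still running, and a nonzero fio exit status is the expected result. The "Operation not supported" (err=95) failures on nvme0n1 through nvme0n4 earlier in the output are those deletions landing while reads were in flight. A condensed sketch of the flow follows; the background launch, the $fio_pid capture, and the rpc variable are written out here for illustration only, the real sequencing lives in test/nvmf/target/fio.sh.

  # Condensed sketch of the hotplug check (illustrative; not the literal fio.sh code).
  /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!                                   # 66787 in this run
  sleep 3                                      # let the read jobs get going
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_raid_delete concat0              # pull the bdevs out from under fio
  "$rpc" bdev_raid_delete raid0
  for m in Malloc{0..6}; do
      "$rpc" bdev_malloc_delete "$m"
  done
  fio_status=0
  wait "$fio_pid" || fio_status=$?             # 4 here: all four jobs hit err=95
  if [ "$fio_status" -ne 0 ]; then
      echo 'nvmf hotplug test: fio failed as expected'
  fi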
00:09:24.970 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:24.970 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:24.970 10:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:24.970 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:24.970 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:24.970 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:24.970 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:24.970 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:24.970 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:24.970 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:24.970 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.970 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.970 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.229 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:25.229 00:09:25.229 real 0m18.804s 00:09:25.229 user 1m10.403s 00:09:25.229 sys 0m8.806s 00:09:25.229 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.229 ************************************ 00:09:25.229 END TEST nvmf_fio_target 00:09:25.229 ************************************ 00:09:25.229 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.229 10:52:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:25.229 10:52:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:25.229 10:52:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.229 10:52:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.229 ************************************ 00:09:25.229 START TEST nvmf_bdevio 00:09:25.229 ************************************ 00:09:25.229 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:25.229 * Looking for test storage... 
00:09:25.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:25.229 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:25.229 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:25.229 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:25.488 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:25.488 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.488 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.488 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.488 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.488 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.488 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.488 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.488 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.488 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.488 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.488 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.488 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:25.488 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:25.488 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:25.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.489 --rc genhtml_branch_coverage=1 00:09:25.489 --rc genhtml_function_coverage=1 00:09:25.489 --rc genhtml_legend=1 00:09:25.489 --rc geninfo_all_blocks=1 00:09:25.489 --rc geninfo_unexecuted_blocks=1 00:09:25.489 00:09:25.489 ' 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:25.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.489 --rc genhtml_branch_coverage=1 00:09:25.489 --rc genhtml_function_coverage=1 00:09:25.489 --rc genhtml_legend=1 00:09:25.489 --rc geninfo_all_blocks=1 00:09:25.489 --rc geninfo_unexecuted_blocks=1 00:09:25.489 00:09:25.489 ' 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:25.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.489 --rc genhtml_branch_coverage=1 00:09:25.489 --rc genhtml_function_coverage=1 00:09:25.489 --rc genhtml_legend=1 00:09:25.489 --rc geninfo_all_blocks=1 00:09:25.489 --rc geninfo_unexecuted_blocks=1 00:09:25.489 00:09:25.489 ' 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:25.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.489 --rc genhtml_branch_coverage=1 00:09:25.489 --rc genhtml_function_coverage=1 00:09:25.489 --rc genhtml_legend=1 00:09:25.489 --rc geninfo_all_blocks=1 00:09:25.489 --rc geninfo_unexecuted_blocks=1 00:09:25.489 00:09:25.489 ' 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.489 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
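The nvmftestinit call that follows rebuilds the virtual test network from scratch: a network namespace for the target, veth pairs on each side, and a bridge joining them, with 10.0.0.1/10.0.0.2 on the initiator side and 10.0.0.3/10.0.0.4 inside the namespace. Below is a condensed, single-pair sketch of the equivalent commands; names and addresses are taken from the nvmf_veth_init trace that follows, while the second initiator/target pair, the FORWARD rule, and error handling are omitted for brevity.

  # Single-pair sketch of the topology nvmf_veth_init builds below.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                               # bridge joins the two halves
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                            # reachability check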
00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:25.489 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:25.490 Cannot find device "nvmf_init_br" 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:25.490 Cannot find device "nvmf_init_br2" 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:25.490 Cannot find device "nvmf_tgt_br" 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:25.490 Cannot find device "nvmf_tgt_br2" 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:25.490 Cannot find device "nvmf_init_br" 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:25.490 Cannot find device "nvmf_init_br2" 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:25.490 Cannot find device "nvmf_tgt_br" 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:25.490 Cannot find device "nvmf_tgt_br2" 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:25.490 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:25.749 Cannot find device "nvmf_br" 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:25.749 Cannot find device "nvmf_init_if" 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:25.749 Cannot find device "nvmf_init_if2" 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:25.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:25.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:25.749 
10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:25.749 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:25.750 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:25.750 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:09:25.750 00:09:25.750 --- 10.0.0.3 ping statistics --- 00:09:25.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.750 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:25.750 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:25.750 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:09:25.750 00:09:25.750 --- 10.0.0.4 ping statistics --- 00:09:25.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.750 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:25.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:25.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:25.750 00:09:25.750 --- 10.0.0.1 ping statistics --- 00:09:25.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.750 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:25.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:25.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:09:25.750 00:09:25.750 --- 10.0.0.2 ping statistics --- 00:09:25.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.750 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:25.750 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.009 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.009 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=67150 00:09:26.009 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:26.009 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 67150 00:09:26.009 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 67150 ']' 00:09:26.009 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.009 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.009 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.009 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.009 10:52:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.009 [2024-12-09 10:52:18.985289] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:09:26.009 [2024-12-09 10:52:18.985343] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.009 [2024-12-09 10:52:19.136841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.268 [2024-12-09 10:52:19.189304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.268 [2024-12-09 10:52:19.189354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.268 [2024-12-09 10:52:19.189377] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.268 [2024-12-09 10:52:19.189382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.268 [2024-12-09 10:52:19.189386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.268 [2024-12-09 10:52:19.190319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:26.268 [2024-12-09 10:52:19.190520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:26.268 [2024-12-09 10:52:19.190714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.268 [2024-12-09 10:52:19.190720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:26.268 [2024-12-09 10:52:19.231855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:26.834 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.834 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:26.834 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:26.834 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:26.834 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.835 [2024-12-09 10:52:19.911305] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.835 Malloc0 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.835 [2024-12-09 10:52:19.987255] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:26.835 { 00:09:26.835 "params": { 00:09:26.835 "name": "Nvme$subsystem", 00:09:26.835 "trtype": "$TEST_TRANSPORT", 00:09:26.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:26.835 "adrfam": "ipv4", 00:09:26.835 "trsvcid": "$NVMF_PORT", 00:09:26.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:26.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:26.835 "hdgst": ${hdgst:-false}, 00:09:26.835 "ddgst": ${ddgst:-false} 00:09:26.835 }, 00:09:26.835 "method": "bdev_nvme_attach_controller" 00:09:26.835 } 00:09:26.835 EOF 00:09:26.835 )") 00:09:26.835 10:52:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:26.835 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:26.835 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:26.835 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:26.835 "params": { 00:09:26.835 "name": "Nvme1", 00:09:26.835 "trtype": "tcp", 00:09:26.835 "traddr": "10.0.0.3", 00:09:26.835 "adrfam": "ipv4", 00:09:26.835 "trsvcid": "4420", 00:09:26.835 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:26.835 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:26.835 "hdgst": false, 00:09:26.835 "ddgst": false 00:09:26.835 }, 00:09:26.835 "method": "bdev_nvme_attach_controller" 00:09:26.835 }' 00:09:27.093 [2024-12-09 10:52:20.044276] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:09:27.093 [2024-12-09 10:52:20.044335] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67186 ] 00:09:27.093 [2024-12-09 10:52:20.185225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:27.093 [2024-12-09 10:52:20.236572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.093 [2024-12-09 10:52:20.236712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.093 [2024-12-09 10:52:20.236716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.351 [2024-12-09 10:52:20.286783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:27.351 I/O targets: 00:09:27.351 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:27.351 00:09:27.351 00:09:27.351 CUnit - A unit testing framework for C - Version 2.1-3 00:09:27.351 http://cunit.sourceforge.net/ 00:09:27.351 00:09:27.351 00:09:27.351 Suite: bdevio tests on: Nvme1n1 00:09:27.351 Test: blockdev write read block ...passed 00:09:27.351 Test: blockdev write zeroes read block ...passed 00:09:27.351 Test: blockdev write zeroes read no split ...passed 00:09:27.351 Test: blockdev write zeroes read split ...passed 00:09:27.351 Test: blockdev write zeroes read split partial ...passed 00:09:27.351 Test: blockdev reset ...[2024-12-09 10:52:20.425864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:27.351 [2024-12-09 10:52:20.425968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaafb80 (9): Bad file descriptor 00:09:27.351 [2024-12-09 10:52:20.445266] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:27.351 passed 00:09:27.351 Test: blockdev write read 8 blocks ...passed 00:09:27.351 Test: blockdev write read size > 128k ...passed 00:09:27.351 Test: blockdev write read invalid size ...passed 00:09:27.351 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:27.351 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:27.351 Test: blockdev write read max offset ...passed 00:09:27.351 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:27.351 Test: blockdev writev readv 8 blocks ...passed 00:09:27.351 Test: blockdev writev readv 30 x 1block ...passed 00:09:27.351 Test: blockdev writev readv block ...passed 00:09:27.351 Test: blockdev writev readv size > 128k ...passed 00:09:27.351 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:27.351 Test: blockdev comparev and writev ...[2024-12-09 10:52:20.451893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.351 [2024-12-09 10:52:20.451930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:27.351 [2024-12-09 10:52:20.451945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.351 [2024-12-09 10:52:20.451953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:27.351 [2024-12-09 10:52:20.452290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.351 [2024-12-09 10:52:20.452310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:27.351 [2024-12-09 10:52:20.452323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.351 [2024-12-09 10:52:20.452330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:27.351 [2024-12-09 10:52:20.452631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.351 [2024-12-09 10:52:20.452657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:27.351 [2024-12-09 10:52:20.452670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.351 [2024-12-09 10:52:20.452677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:27.351 [2024-12-09 10:52:20.453045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.351 [2024-12-09 10:52:20.453067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:27.351 [2024-12-09 10:52:20.453080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:27.351 [2024-12-09 10:52:20.453088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:27.351 passed 00:09:27.351 Test: blockdev nvme passthru rw ...passed 00:09:27.351 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:52:20.453954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:27.351 [2024-12-09 10:52:20.453987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:27.351 [2024-12-09 10:52:20.454086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:27.351 [2024-12-09 10:52:20.454103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:27.351 [2024-12-09 10:52:20.454210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:27.351 [2024-12-09 10:52:20.454225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:27.351 [2024-12-09 10:52:20.454338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:27.351 [2024-12-09 10:52:20.454359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:27.351 passed 00:09:27.351 Test: blockdev nvme admin passthru ...passed 00:09:27.351 Test: blockdev copy ...passed 00:09:27.351 00:09:27.351 Run Summary: Type Total Ran Passed Failed Inactive 00:09:27.351 suites 1 1 n/a 0 0 00:09:27.351 tests 23 23 23 0 0 00:09:27.351 asserts 152 152 152 0 n/a 00:09:27.351 00:09:27.351 Elapsed time = 0.143 seconds 00:09:27.610 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:27.610 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.610 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:27.610 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.610 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:27.610 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:27.610 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:27.610 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:27.610 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:27.610 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:27.610 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:27.610 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:27.610 rmmod nvme_tcp 00:09:27.611 rmmod nvme_fabrics 00:09:27.611 rmmod nvme_keyring 00:09:27.611 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:27.611 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:27.611 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:09:27.611 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 67150 ']' 00:09:27.611 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 67150 00:09:27.611 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 67150 ']' 00:09:27.611 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 67150 00:09:27.611 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:27.611 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.870 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67150 00:09:27.870 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:27.870 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:27.870 killing process with pid 67150 00:09:27.870 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67150' 00:09:27.870 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 67150 00:09:27.870 10:52:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 67150 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.129 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.388 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:28.388 00:09:28.388 real 0m3.092s 00:09:28.388 user 0m8.754s 00:09:28.388 sys 0m0.900s 00:09:28.388 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.388 10:52:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:28.388 ************************************ 00:09:28.388 END TEST nvmf_bdevio 00:09:28.388 ************************************ 00:09:28.388 10:52:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:28.388 00:09:28.388 real 2m35.110s 00:09:28.388 user 6m47.137s 00:09:28.388 sys 0m48.991s 00:09:28.388 10:52:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.388 10:52:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:28.388 ************************************ 00:09:28.388 END TEST nvmf_target_core 00:09:28.388 ************************************ 00:09:28.388 10:52:21 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:28.388 10:52:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:28.388 10:52:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.388 10:52:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.388 ************************************ 00:09:28.388 START TEST nvmf_target_extra 00:09:28.388 ************************************ 00:09:28.388 10:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:28.388 * Looking for test storage... 
00:09:28.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:28.388 10:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:28.388 10:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:28.388 10:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:28.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.659 --rc genhtml_branch_coverage=1 00:09:28.659 --rc genhtml_function_coverage=1 00:09:28.659 --rc genhtml_legend=1 00:09:28.659 --rc geninfo_all_blocks=1 00:09:28.659 --rc geninfo_unexecuted_blocks=1 00:09:28.659 00:09:28.659 ' 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:28.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.659 --rc genhtml_branch_coverage=1 00:09:28.659 --rc genhtml_function_coverage=1 00:09:28.659 --rc genhtml_legend=1 00:09:28.659 --rc geninfo_all_blocks=1 00:09:28.659 --rc geninfo_unexecuted_blocks=1 00:09:28.659 00:09:28.659 ' 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:28.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.659 --rc genhtml_branch_coverage=1 00:09:28.659 --rc genhtml_function_coverage=1 00:09:28.659 --rc genhtml_legend=1 00:09:28.659 --rc geninfo_all_blocks=1 00:09:28.659 --rc geninfo_unexecuted_blocks=1 00:09:28.659 00:09:28.659 ' 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:28.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.659 --rc genhtml_branch_coverage=1 00:09:28.659 --rc genhtml_function_coverage=1 00:09:28.659 --rc genhtml_legend=1 00:09:28.659 --rc geninfo_all_blocks=1 00:09:28.659 --rc geninfo_unexecuted_blocks=1 00:09:28.659 00:09:28.659 ' 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.659 10:52:21 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.659 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:28.659 ************************************ 00:09:28.659 START TEST nvmf_auth_target 00:09:28.659 ************************************ 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:28.659 * Looking for test storage... 
00:09:28.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:28.659 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:28.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.936 --rc genhtml_branch_coverage=1 00:09:28.936 --rc genhtml_function_coverage=1 00:09:28.936 --rc genhtml_legend=1 00:09:28.936 --rc geninfo_all_blocks=1 00:09:28.936 --rc geninfo_unexecuted_blocks=1 00:09:28.936 00:09:28.936 ' 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:28.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.936 --rc genhtml_branch_coverage=1 00:09:28.936 --rc genhtml_function_coverage=1 00:09:28.936 --rc genhtml_legend=1 00:09:28.936 --rc geninfo_all_blocks=1 00:09:28.936 --rc geninfo_unexecuted_blocks=1 00:09:28.936 00:09:28.936 ' 00:09:28.936 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:28.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.936 --rc genhtml_branch_coverage=1 00:09:28.936 --rc genhtml_function_coverage=1 00:09:28.936 --rc genhtml_legend=1 00:09:28.936 --rc geninfo_all_blocks=1 00:09:28.936 --rc geninfo_unexecuted_blocks=1 00:09:28.936 00:09:28.936 ' 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:28.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.937 --rc genhtml_branch_coverage=1 00:09:28.937 --rc genhtml_function_coverage=1 00:09:28.937 --rc genhtml_legend=1 00:09:28.937 --rc geninfo_all_blocks=1 00:09:28.937 --rc geninfo_unexecuted_blocks=1 00:09:28.937 00:09:28.937 ' 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.937 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:28.937 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:28.938 
10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:28.938 Cannot find device "nvmf_init_br" 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:28.938 Cannot find device "nvmf_init_br2" 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:28.938 Cannot find device "nvmf_tgt_br" 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:28.938 Cannot find device "nvmf_tgt_br2" 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:28.938 10:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:28.938 Cannot find device "nvmf_init_br" 00:09:28.938 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:28.938 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:28.938 Cannot find device "nvmf_init_br2" 00:09:28.938 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:28.938 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:28.938 Cannot find device "nvmf_tgt_br" 00:09:28.938 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:28.938 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:28.938 Cannot find device "nvmf_tgt_br2" 00:09:28.938 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:28.938 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:28.938 Cannot find device "nvmf_br" 00:09:28.938 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:28.938 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:28.938 Cannot find device "nvmf_init_if" 00:09:28.938 10:52:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:28.938 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:28.938 Cannot find device "nvmf_init_if2" 00:09:28.938 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:28.938 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:28.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:29.199 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:29.199 10:52:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:29.199 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:29.199 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:09:29.199 00:09:29.199 --- 10.0.0.3 ping statistics --- 00:09:29.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.199 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:29.199 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:29.199 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:09:29.199 00:09:29.199 --- 10.0.0.4 ping statistics --- 00:09:29.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.199 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:29.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:09:29.199 00:09:29.199 --- 10.0.0.1 ping statistics --- 00:09:29.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.199 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:29.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:09:29.199 00:09:29.199 --- 10.0.0.2 ping statistics --- 00:09:29.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.199 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:29.199 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:29.200 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.200 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:29.200 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:29.200 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:09:29.200 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:29.200 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:29.200 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.200 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67466 00:09:29.200 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67466 00:09:29.200 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67466 ']' 00:09:29.200 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.200 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.200 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
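The trace above builds the virtual test network everything else in this run depends on: veth pairs for the initiator side (nvmf_init_if, nvmf_init_if2) and for the target side (nvmf_tgt_if, nvmf_tgt_if2, moved into the nvmf_tgt_ns_spdk namespace), all of the peer ends enslaved to the nvmf_br bridge, 10.0.0.1/.2 on the initiator interfaces and 10.0.0.3/.4 inside the namespace, plus iptables rules that accept NVMe/TCP traffic on port 4420. A minimal standalone sketch of the same topology, restricted to the first interface pair and using only commands that appear in the trace (run as root), would look roughly like this:

    # namespace for the target and one veth pair per side (host end / bridge end)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addressing: initiator side on 10.0.0.1, target side on 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the two sides together and open the NVMe/TCP listener port
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # connectivity check, mirroring the pings in the trace
    ping -c 1 10.0.0.3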
00:09:29.200 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.200 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:29.200 10:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:30.137 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.137 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:30.137 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:30.137 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:30.137 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67502 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dd5ef0156100ab142e00e5b9279410201c0938fd064394d0 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.bpe 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dd5ef0156100ab142e00e5b9279410201c0938fd064394d0 0 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 dd5ef0156100ab142e00e5b9279410201c0938fd064394d0 0 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dd5ef0156100ab142e00e5b9279410201c0938fd064394d0 00:09:30.397 10:52:23 
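At this point two SPDK applications are launched for the DH-HMAC-CHAP test: the NVMe-oF target (nvmf_tgt) runs inside the nvmf_tgt_ns_spdk namespace with -L nvmf_auth debug logging and answers RPCs on the default /var/tmp/spdk.sock, while a second spdk_tgt instance plays the host role, with its RPC server moved to /var/tmp/host.sock and -L nvme_auth. Reduced to the two launch commands visible in the trace (paths as they appear in this run):

    # target side, inside the test namespace, RPC on the default /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &

    # host side, a second SPDK app whose RPC server listens on /var/tmp/host.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &

The script then waits for both PIDs (67466 and 67502 in this run) to start listening on their RPC sockets before any keys are generated.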
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.bpe 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.bpe 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.bpe 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ff21c3a26e092ffcd0ab0f99c976e98756f09d53bc0b0c0aa6244c2b9316961c 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.28L 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ff21c3a26e092ffcd0ab0f99c976e98756f09d53bc0b0c0aa6244c2b9316961c 3 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ff21c3a26e092ffcd0ab0f99c976e98756f09d53bc0b0c0aa6244c2b9316961c 3 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ff21c3a26e092ffcd0ab0f99c976e98756f09d53bc0b0c0aa6244c2b9316961c 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.28L 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.28L 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.28L 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:30.397 10:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=218e2283da645be86833623de7039d85 00:09:30.397 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.B4a 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 218e2283da645be86833623de7039d85 1 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 218e2283da645be86833623de7039d85 1 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=218e2283da645be86833623de7039d85 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.B4a 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.B4a 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.B4a 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9aad4649cb8bdff7667d6467a5f6b26148e4c7117b10d981 00:09:30.398 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:30.657 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.w45 00:09:30.657 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9aad4649cb8bdff7667d6467a5f6b26148e4c7117b10d981 2 00:09:30.657 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
9aad4649cb8bdff7667d6467a5f6b26148e4c7117b10d981 2 00:09:30.657 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:30.657 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:30.657 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9aad4649cb8bdff7667d6467a5f6b26148e4c7117b10d981 00:09:30.657 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:30.657 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:30.657 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.w45 00:09:30.657 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.w45 00:09:30.657 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.w45 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4cb77e85d865020329d57df2c3512b1209aa7a4de9cdab78 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.DBA 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4cb77e85d865020329d57df2c3512b1209aa7a4de9cdab78 2 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4cb77e85d865020329d57df2c3512b1209aa7a4de9cdab78 2 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4cb77e85d865020329d57df2c3512b1209aa7a4de9cdab78 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.DBA 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.DBA 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.DBA 00:09:30.658 10:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4d91a55aadf1cf783ef3bbb531815391 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.KYW 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4d91a55aadf1cf783ef3bbb531815391 1 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4d91a55aadf1cf783ef3bbb531815391 1 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4d91a55aadf1cf783ef3bbb531815391 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.KYW 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.KYW 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.KYW 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3e85b15356d8a0c17b34fd10516917908d0ae8776d170a6a47521106d1a685d2 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:30.658 
10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.hBK 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3e85b15356d8a0c17b34fd10516917908d0ae8776d170a6a47521106d1a685d2 3 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3e85b15356d8a0c17b34fd10516917908d0ae8776d170a6a47521106d1a685d2 3 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3e85b15356d8a0c17b34fd10516917908d0ae8776d170a6a47521106d1a685d2 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:30.658 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:30.918 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.hBK 00:09:30.918 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.hBK 00:09:30.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.918 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.hBK 00:09:30.918 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:09:30.918 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67466 00:09:30.918 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67466 ']' 00:09:30.918 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.918 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.918 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.918 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.918 10:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.918 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.918 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:30.918 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67502 /var/tmp/host.sock 00:09:30.918 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67502 ']' 00:09:30.918 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:09:30.918 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.918 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
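The gen_dhchap_key calls above produce the four host keys (keys[0..3]) and three controller keys (ckeys[0..2]) used for the rest of the run. Each call reads len/2 random bytes with xxd -p -c0 -l <len/2> /dev/urandom, maps the digest name through the digests table (null=0, sha256=1, sha384=2, sha512=3), and hands the hex string plus that digest code to a small inline "python -" helper that writes the DHHC-1-formatted secret into a mktemp'd /tmp/spdk.key-* file, which is then chmod 0600. The exact DHHC-1 encoding happens inside that helper and is not shown in this excerpt, so the sketch below treats it as a black box; what follows in the trace is the registration of every key file with both RPC servers via keyring_file_add_key (scripts/rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used in the trace):

    # roughly what gen_dhchap_key <digest> <len> does in this trace
    key=$(xxd -p -c0 -l 24 /dev/urandom)         # 24 bytes -> 48 hex characters
    file=$(mktemp -t spdk.key-null.XXX)
    # ... inline python helper formats "$key" as a DHHC-1:<digest-code>:...: secret
    #     and writes it to "$file" (encoding details not visible in this excerpt) ...
    chmod 0600 "$file"

    # register the key file with the target (default /var/tmp/spdk.sock) and with
    # the host application (-s /var/tmp/host.sock), as the following trace shows
    scripts/rpc.py keyring_file_add_key key0 "$file"
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 "$file"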
00:09:30.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:30.918 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.918 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.177 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.177 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:09:31.177 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:09:31.177 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.177 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.177 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.177 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:31.177 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.bpe 00:09:31.177 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.177 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.177 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.177 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.bpe 00:09:31.177 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.bpe 00:09:31.436 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.28L ]] 00:09:31.436 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.28L 00:09:31.436 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.436 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.436 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.436 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.28L 00:09:31.436 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.28L 00:09:31.695 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:31.695 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.B4a 00:09:31.695 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.695 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.695 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.695 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.B4a 00:09:31.695 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.B4a 00:09:31.954 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.w45 ]] 00:09:31.954 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.w45 00:09:31.954 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.954 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:31.954 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.954 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.w45 00:09:31.954 10:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.w45 00:09:31.954 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:31.954 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.DBA 00:09:31.954 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.954 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.212 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.212 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.DBA 00:09:32.212 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.DBA 00:09:32.212 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.KYW ]] 00:09:32.212 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KYW 00:09:32.212 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.212 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.212 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.212 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KYW 00:09:32.212 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KYW 00:09:32.471 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:32.471 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.hBK 00:09:32.471 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.471 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.471 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.471 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.hBK 00:09:32.471 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.hBK 00:09:32.729 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:09:32.729 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:32.729 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:32.729 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:32.729 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:32.729 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:32.987 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:09:32.987 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:32.987 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:32.987 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:32.987 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:32.987 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:32.987 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:32.987 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.987 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.987 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.987 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:32.987 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:32.987 10:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:33.244 00:09:33.244 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:33.245 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:33.245 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:33.245 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:33.245 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:33.245 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.245 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:33.245 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.245 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:33.245 { 00:09:33.245 "cntlid": 1, 00:09:33.245 "qid": 0, 00:09:33.245 "state": "enabled", 00:09:33.245 "thread": "nvmf_tgt_poll_group_000", 00:09:33.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:09:33.245 "listen_address": { 00:09:33.245 "trtype": "TCP", 00:09:33.245 "adrfam": "IPv4", 00:09:33.245 "traddr": "10.0.0.3", 00:09:33.245 "trsvcid": "4420" 00:09:33.245 }, 00:09:33.245 "peer_address": { 00:09:33.245 "trtype": "TCP", 00:09:33.245 "adrfam": "IPv4", 00:09:33.245 "traddr": "10.0.0.1", 00:09:33.245 "trsvcid": "48790" 00:09:33.245 }, 00:09:33.245 "auth": { 00:09:33.245 "state": "completed", 00:09:33.245 "digest": "sha256", 00:09:33.245 "dhgroup": "null" 00:09:33.245 } 00:09:33.245 } 00:09:33.245 ]' 00:09:33.245 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:33.503 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:33.503 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:33.503 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:33.503 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:33.503 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:33.503 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:33.503 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:33.760 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret 
DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:09:33.761 10:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:09:37.041 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:37.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:37.041 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:37.041 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.041 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.041 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.041 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:37.041 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:37.042 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:37.300 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:09:37.300 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:37.300 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:37.300 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:37.300 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:37.300 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:37.300 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:37.300 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.300 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.300 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.300 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:37.300 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:37.300 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:37.563 00:09:37.563 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:37.563 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:37.563 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:37.829 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:37.829 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:37.829 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.829 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.829 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.829 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:37.829 { 00:09:37.829 "cntlid": 3, 00:09:37.829 "qid": 0, 00:09:37.829 "state": "enabled", 00:09:37.829 "thread": "nvmf_tgt_poll_group_000", 00:09:37.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:09:37.829 "listen_address": { 00:09:37.829 "trtype": "TCP", 00:09:37.829 "adrfam": "IPv4", 00:09:37.829 "traddr": "10.0.0.3", 00:09:37.829 "trsvcid": "4420" 00:09:37.829 }, 00:09:37.829 "peer_address": { 00:09:37.829 "trtype": "TCP", 00:09:37.829 "adrfam": "IPv4", 00:09:37.829 "traddr": "10.0.0.1", 00:09:37.829 "trsvcid": "43434" 00:09:37.829 }, 00:09:37.829 "auth": { 00:09:37.829 "state": "completed", 00:09:37.829 "digest": "sha256", 00:09:37.829 "dhgroup": "null" 00:09:37.829 } 00:09:37.829 } 00:09:37.829 ]' 00:09:37.829 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:37.829 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:37.829 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:37.829 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:37.829 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:37.829 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:37.829 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:37.829 10:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:38.091 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
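Each connect_authenticate iteration seen here follows the same pattern: the host application's DH-HMAC-CHAP policy is pinned with bdev_nvme_set_options, the target grants the host NQN access to nqn.2024-03.io.spdk:cnode0 with nvmf_subsystem_add_host and the key pair under test, the host attaches a controller over 10.0.0.3:4420 with the matching --dhchap-key/--dhchap-ctrlr-key, and nvmf_subsystem_get_qpairs is filtered through jq to confirm the negotiated digest, dhgroup, and an auth state of "completed" before the controller is detached again. Condensed to the key0 case (rpc.py paths shortened; the NQN and host UUID are the values from this run):

    HOSTSOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7

    # host side: restrict the allowed digest/dhgroup for this iteration
    scripts/rpc.py -s "$HOSTSOCK" bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null

    # target side: allow the host NQN, bound to the key pair under test
    scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # host side: authenticated attach, then verify the qpair's auth block
    scripts/rpc.py -s "$HOSTSOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # "completed"
    scripts/rpc.py -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0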
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:09:38.091 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:09:38.655 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:38.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:38.655 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:38.655 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.655 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.655 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.655 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:38.655 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:38.655 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:38.912 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:09:38.912 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:38.912 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:38.912 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:38.912 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:38.912 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:38.912 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:38.912 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.912 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.912 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.912 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:38.912 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:38.912 10:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:39.169 00:09:39.169 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:39.169 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:39.169 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:39.427 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:39.427 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:39.427 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.427 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.427 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.427 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:39.427 { 00:09:39.427 "cntlid": 5, 00:09:39.427 "qid": 0, 00:09:39.427 "state": "enabled", 00:09:39.427 "thread": "nvmf_tgt_poll_group_000", 00:09:39.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:09:39.427 "listen_address": { 00:09:39.427 "trtype": "TCP", 00:09:39.427 "adrfam": "IPv4", 00:09:39.427 "traddr": "10.0.0.3", 00:09:39.427 "trsvcid": "4420" 00:09:39.427 }, 00:09:39.427 "peer_address": { 00:09:39.427 "trtype": "TCP", 00:09:39.427 "adrfam": "IPv4", 00:09:39.427 "traddr": "10.0.0.1", 00:09:39.427 "trsvcid": "43474" 00:09:39.427 }, 00:09:39.427 "auth": { 00:09:39.427 "state": "completed", 00:09:39.427 "digest": "sha256", 00:09:39.427 "dhgroup": "null" 00:09:39.427 } 00:09:39.427 } 00:09:39.427 ]' 00:09:39.427 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:39.427 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:39.427 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:39.427 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:39.427 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:39.427 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:39.427 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:39.427 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:09:39.685 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:09:39.685 10:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:09:40.251 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:40.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:40.251 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:40.251 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.251 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.251 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.251 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:40.251 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:40.251 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:40.509 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:09:40.509 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:40.509 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:40.509 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:40.509 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:40.509 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:40.509 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:09:40.509 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.509 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.509 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.509 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:40.509 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
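Between the SPDK-side check and the next key, each key pair is also exercised through the kernel initiator: nvme_connect wraps nvme connect with --dhchap-secret (the host key) and, when a controller key was generated, --dhchap-ctrl-secret, the resulting controller is torn down with nvme disconnect, and the host entry is removed from the subsystem before the next iteration. Condensed to one iteration, with the long DHHC-1 secrets elided (the full strings appear verbatim in the trace):

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7

    # kernel initiator: authenticate against the target with the same key material
    nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 \
        -q "$HOSTNQN" --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 \
        --dhchap-secret 'DHHC-1:02:...:' --dhchap-ctrl-secret 'DHHC-1:01:...:'

    # tear down and revoke access before the next key is tested
    nvme disconnect -n "$SUBNQN"
    scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"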
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:40.509 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:40.767 00:09:40.767 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:40.767 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:40.767 10:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:41.025 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:41.025 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:41.025 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.025 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.025 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.025 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:41.025 { 00:09:41.025 "cntlid": 7, 00:09:41.025 "qid": 0, 00:09:41.025 "state": "enabled", 00:09:41.025 "thread": "nvmf_tgt_poll_group_000", 00:09:41.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:09:41.025 "listen_address": { 00:09:41.025 "trtype": "TCP", 00:09:41.025 "adrfam": "IPv4", 00:09:41.025 "traddr": "10.0.0.3", 00:09:41.025 "trsvcid": "4420" 00:09:41.025 }, 00:09:41.025 "peer_address": { 00:09:41.025 "trtype": "TCP", 00:09:41.025 "adrfam": "IPv4", 00:09:41.025 "traddr": "10.0.0.1", 00:09:41.025 "trsvcid": "43494" 00:09:41.025 }, 00:09:41.025 "auth": { 00:09:41.025 "state": "completed", 00:09:41.025 "digest": "sha256", 00:09:41.025 "dhgroup": "null" 00:09:41.025 } 00:09:41.025 } 00:09:41.025 ]' 00:09:41.025 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:41.025 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:41.025 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:41.283 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:41.283 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:41.283 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:41.283 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:41.283 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:41.541 10:52:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:09:41.541 10:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:42.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.107 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.365 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.365 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:42.365 10:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:42.365 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:42.623 00:09:42.623 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:42.623 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:42.623 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:42.623 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:42.623 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:42.623 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.623 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.623 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.623 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:42.623 { 00:09:42.623 "cntlid": 9, 00:09:42.623 "qid": 0, 00:09:42.623 "state": "enabled", 00:09:42.623 "thread": "nvmf_tgt_poll_group_000", 00:09:42.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:09:42.623 "listen_address": { 00:09:42.623 "trtype": "TCP", 00:09:42.623 "adrfam": "IPv4", 00:09:42.623 "traddr": "10.0.0.3", 00:09:42.623 "trsvcid": "4420" 00:09:42.623 }, 00:09:42.623 "peer_address": { 00:09:42.623 "trtype": "TCP", 00:09:42.623 "adrfam": "IPv4", 00:09:42.623 "traddr": "10.0.0.1", 00:09:42.623 "trsvcid": "43530" 00:09:42.623 }, 00:09:42.623 "auth": { 00:09:42.623 "state": "completed", 00:09:42.623 "digest": "sha256", 00:09:42.623 "dhgroup": "ffdhe2048" 00:09:42.623 } 00:09:42.623 } 00:09:42.623 ]' 00:09:42.623 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:42.882 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:42.882 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:42.882 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:42.882 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:42.882 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:42.882 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:42.882 10:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:43.140 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:09:43.140 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:09:43.707 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:43.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:43.707 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:43.707 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.707 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.707 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.707 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:43.707 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:43.707 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:43.965 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:09:43.965 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:43.965 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:43.965 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:43.965 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:43.965 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:43.965 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:43.965 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.965 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.965 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:43.965 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:43.965 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:43.965 10:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:44.223 00:09:44.223 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:44.223 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:44.223 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:44.481 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:44.481 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:44.481 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.481 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.481 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.481 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:44.481 { 00:09:44.481 "cntlid": 11, 00:09:44.481 "qid": 0, 00:09:44.481 "state": "enabled", 00:09:44.481 "thread": "nvmf_tgt_poll_group_000", 00:09:44.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:09:44.481 "listen_address": { 00:09:44.481 "trtype": "TCP", 00:09:44.481 "adrfam": "IPv4", 00:09:44.481 "traddr": "10.0.0.3", 00:09:44.481 "trsvcid": "4420" 00:09:44.481 }, 00:09:44.481 "peer_address": { 00:09:44.481 "trtype": "TCP", 00:09:44.481 "adrfam": "IPv4", 00:09:44.481 "traddr": "10.0.0.1", 00:09:44.481 "trsvcid": "33700" 00:09:44.481 }, 00:09:44.481 "auth": { 00:09:44.481 "state": "completed", 00:09:44.482 "digest": "sha256", 00:09:44.482 "dhgroup": "ffdhe2048" 00:09:44.482 } 00:09:44.482 } 00:09:44.482 ]' 00:09:44.482 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:44.482 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:44.482 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:44.482 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:44.482 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:44.482 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:44.482 10:52:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:44.482 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:44.739 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:09:44.740 10:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:09:45.305 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:45.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:45.305 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:45.305 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.305 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.305 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.306 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:45.306 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:45.306 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:45.564 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:09:45.564 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:45.564 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:45.564 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:45.564 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:45.564 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:45.564 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:45.564 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.564 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:45.564 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.564 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:45.564 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:45.564 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:45.822 00:09:45.822 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:45.822 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:45.822 10:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:46.080 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:46.080 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:46.080 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.080 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.080 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.080 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:46.080 { 00:09:46.080 "cntlid": 13, 00:09:46.080 "qid": 0, 00:09:46.080 "state": "enabled", 00:09:46.080 "thread": "nvmf_tgt_poll_group_000", 00:09:46.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:09:46.080 "listen_address": { 00:09:46.080 "trtype": "TCP", 00:09:46.080 "adrfam": "IPv4", 00:09:46.080 "traddr": "10.0.0.3", 00:09:46.080 "trsvcid": "4420" 00:09:46.080 }, 00:09:46.080 "peer_address": { 00:09:46.080 "trtype": "TCP", 00:09:46.080 "adrfam": "IPv4", 00:09:46.080 "traddr": "10.0.0.1", 00:09:46.080 "trsvcid": "33720" 00:09:46.080 }, 00:09:46.080 "auth": { 00:09:46.080 "state": "completed", 00:09:46.080 "digest": "sha256", 00:09:46.081 "dhgroup": "ffdhe2048" 00:09:46.081 } 00:09:46.081 } 00:09:46.081 ]' 00:09:46.081 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:46.081 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:46.081 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:46.081 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:46.081 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:46.081 10:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:46.081 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:46.081 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:46.339 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:09:46.339 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:09:46.905 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:46.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:46.905 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:46.905 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.905 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.905 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.905 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:46.906 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:46.906 10:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:47.163 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:09:47.163 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:47.163 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:47.163 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:47.163 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:47.163 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:47.163 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:09:47.163 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:47.163 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.163 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.163 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:47.163 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:47.163 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:47.420 00:09:47.421 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:47.421 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:47.421 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:47.678 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:47.678 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:47.678 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.678 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.678 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.678 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:47.678 { 00:09:47.678 "cntlid": 15, 00:09:47.678 "qid": 0, 00:09:47.678 "state": "enabled", 00:09:47.678 "thread": "nvmf_tgt_poll_group_000", 00:09:47.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:09:47.678 "listen_address": { 00:09:47.678 "trtype": "TCP", 00:09:47.678 "adrfam": "IPv4", 00:09:47.678 "traddr": "10.0.0.3", 00:09:47.678 "trsvcid": "4420" 00:09:47.678 }, 00:09:47.678 "peer_address": { 00:09:47.678 "trtype": "TCP", 00:09:47.678 "adrfam": "IPv4", 00:09:47.678 "traddr": "10.0.0.1", 00:09:47.678 "trsvcid": "33740" 00:09:47.678 }, 00:09:47.678 "auth": { 00:09:47.678 "state": "completed", 00:09:47.678 "digest": "sha256", 00:09:47.678 "dhgroup": "ffdhe2048" 00:09:47.678 } 00:09:47.678 } 00:09:47.678 ]' 00:09:47.678 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:47.678 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:47.678 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:47.678 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:47.678 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:47.678 
10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:47.678 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:47.678 10:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:47.941 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:09:47.941 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:09:48.528 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:48.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:48.528 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:48.528 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.528 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.528 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.528 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:48.528 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:48.528 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:48.529 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:48.787 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:09:48.787 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:48.787 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:48.787 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:48.787 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:48.787 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:48.787 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:48.787 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.787 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.787 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.787 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:48.787 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:48.787 10:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:49.045 00:09:49.304 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:49.304 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:49.304 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:49.304 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:49.304 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:49.304 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.304 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.304 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.304 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:49.304 { 00:09:49.304 "cntlid": 17, 00:09:49.304 "qid": 0, 00:09:49.304 "state": "enabled", 00:09:49.304 "thread": "nvmf_tgt_poll_group_000", 00:09:49.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:09:49.304 "listen_address": { 00:09:49.304 "trtype": "TCP", 00:09:49.304 "adrfam": "IPv4", 00:09:49.304 "traddr": "10.0.0.3", 00:09:49.304 "trsvcid": "4420" 00:09:49.304 }, 00:09:49.304 "peer_address": { 00:09:49.304 "trtype": "TCP", 00:09:49.304 "adrfam": "IPv4", 00:09:49.304 "traddr": "10.0.0.1", 00:09:49.304 "trsvcid": "33766" 00:09:49.304 }, 00:09:49.304 "auth": { 00:09:49.304 "state": "completed", 00:09:49.304 "digest": "sha256", 00:09:49.304 "dhgroup": "ffdhe3072" 00:09:49.304 } 00:09:49.304 } 00:09:49.304 ]' 00:09:49.304 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:49.561 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:49.561 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:49.562 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:49.562 10:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:49.562 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:49.562 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:49.562 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:49.820 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:09:49.820 10:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:09:50.387 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:50.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:50.387 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:50.387 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.387 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.387 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.387 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:50.387 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:50.387 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:50.645 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:09:50.645 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:50.645 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:50.645 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:50.645 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:50.645 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:50.645 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:50.645 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.645 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.645 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.645 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:50.645 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:50.645 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:50.904 00:09:50.904 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:50.904 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:50.904 10:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:51.163 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:51.163 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:51.163 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.163 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.163 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.163 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:51.163 { 00:09:51.163 "cntlid": 19, 00:09:51.163 "qid": 0, 00:09:51.163 "state": "enabled", 00:09:51.163 "thread": "nvmf_tgt_poll_group_000", 00:09:51.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:09:51.163 "listen_address": { 00:09:51.163 "trtype": "TCP", 00:09:51.163 "adrfam": "IPv4", 00:09:51.163 "traddr": "10.0.0.3", 00:09:51.163 "trsvcid": "4420" 00:09:51.163 }, 00:09:51.163 "peer_address": { 00:09:51.163 "trtype": "TCP", 00:09:51.163 "adrfam": "IPv4", 00:09:51.163 "traddr": "10.0.0.1", 00:09:51.163 "trsvcid": "33794" 00:09:51.163 }, 00:09:51.163 "auth": { 00:09:51.163 "state": "completed", 00:09:51.163 "digest": "sha256", 00:09:51.163 "dhgroup": "ffdhe3072" 00:09:51.163 } 00:09:51.163 } 00:09:51.163 ]' 00:09:51.163 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:51.163 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:51.163 10:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:51.163 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:51.163 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:51.163 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:51.164 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:51.164 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:51.422 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:09:51.423 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:09:51.990 10:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:51.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:51.990 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:51.990 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.990 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.990 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.990 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:51.990 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:51.990 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:52.248 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:09:52.248 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:52.248 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:52.248 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:52.248 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:52.248 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:52.248 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:52.248 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.248 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.248 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.249 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:52.249 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:52.249 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:52.507 00:09:52.507 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:52.507 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:52.507 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:52.765 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:52.765 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:52.765 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.765 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.765 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.765 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:52.765 { 00:09:52.765 "cntlid": 21, 00:09:52.765 "qid": 0, 00:09:52.765 "state": "enabled", 00:09:52.765 "thread": "nvmf_tgt_poll_group_000", 00:09:52.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:09:52.765 "listen_address": { 00:09:52.765 "trtype": "TCP", 00:09:52.765 "adrfam": "IPv4", 00:09:52.765 "traddr": "10.0.0.3", 00:09:52.765 "trsvcid": "4420" 00:09:52.765 }, 00:09:52.765 "peer_address": { 00:09:52.765 "trtype": "TCP", 00:09:52.765 "adrfam": "IPv4", 00:09:52.765 "traddr": "10.0.0.1", 00:09:52.765 "trsvcid": "33816" 00:09:52.765 }, 00:09:52.765 "auth": { 00:09:52.765 "state": "completed", 00:09:52.765 "digest": "sha256", 00:09:52.765 "dhgroup": "ffdhe3072" 00:09:52.765 } 00:09:52.765 } 00:09:52.765 ]' 00:09:52.765 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
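Each pass above exercises one digest/dhgroup/key combination with the same host-side sequence: constrain the initiator's DHCHAP digests and dhgroups, register the host on the subsystem with a DHCHAP key, attach a controller over TCP (DH-HMAC-CHAP runs as part of the connect), assert the qpair's auth state/digest/dhgroup via nvmf_subsystem_get_qpairs, then detach and repeat the handshake with nvme-cli using the raw DHHC-1 secrets. Below is a minimal sketch of one such pass, with the address, NQNs and host UUID copied from the log above; the host_rpc/tgt_rpc helpers stand in for the hostrpc and rpc_cmd wrappers (the target is assumed to sit on the default RPC socket), and key0/ckey0 are assumed to be keyring entries registered earlier in the run, outside this excerpt.

#!/usr/bin/env bash
set -euo pipefail

# RPC wrappers mirroring hostrpc (initiator) and rpc_cmd (target) in the log.
host_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
tgt_rpc()  { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }  # default target socket assumed

SUBNQN="nqn.2024-03.io.spdk:cnode0"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7"

# 1. Limit the initiator to a single digest/dhgroup pair for this pass.
host_rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# 2. Allow the host on the subsystem, binding the DHCHAP key (and optional controller key).
tgt_rpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller over TCP; authentication happens during connect.
host_rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. Assert the qpair authenticated with the expected parameters.
qpairs=$(tgt_rpc nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]

# 5. Tear down before the next combination.
host_rpc bdev_nvme_detach_controller nvme0

The nvme-cli leg seen in the log is the same handshake driven from the kernel initiator: nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -q "$HOSTNQN" --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -i 1 -l 0 with --dhchap-secret and --dhchap-ctrl-secret set to the raw DHHC-1 strings printed above, followed by nvme disconnect -n "$SUBNQN" and nvmf_subsystem_remove_host before the next key is tried.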
00:09:52.765 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:52.765 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:52.765 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:52.765 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:52.765 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:52.765 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:52.765 10:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:53.024 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:09:53.024 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:09:53.592 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:53.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:53.592 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:53.592 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.592 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.592 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.592 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:53.592 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:53.592 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:53.851 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:09:53.851 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:53.851 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:53.851 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:53.851 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:09:53.851 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:53.851 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:09:53.851 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.851 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.851 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.851 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:53.851 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:53.851 10:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:54.109 00:09:54.109 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:54.109 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:54.109 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:54.367 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:54.367 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:54.367 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.367 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.367 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.367 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:54.367 { 00:09:54.367 "cntlid": 23, 00:09:54.367 "qid": 0, 00:09:54.367 "state": "enabled", 00:09:54.367 "thread": "nvmf_tgt_poll_group_000", 00:09:54.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:09:54.367 "listen_address": { 00:09:54.367 "trtype": "TCP", 00:09:54.367 "adrfam": "IPv4", 00:09:54.367 "traddr": "10.0.0.3", 00:09:54.367 "trsvcid": "4420" 00:09:54.367 }, 00:09:54.368 "peer_address": { 00:09:54.368 "trtype": "TCP", 00:09:54.368 "adrfam": "IPv4", 00:09:54.368 "traddr": "10.0.0.1", 00:09:54.368 "trsvcid": "41458" 00:09:54.368 }, 00:09:54.368 "auth": { 00:09:54.368 "state": "completed", 00:09:54.368 "digest": "sha256", 00:09:54.368 "dhgroup": "ffdhe3072" 00:09:54.368 } 00:09:54.368 } 00:09:54.368 ]' 00:09:54.368 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:09:54.368 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:54.368 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:54.368 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:54.368 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:54.626 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:54.626 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:54.626 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:54.626 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:09:54.626 10:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:09:55.191 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:55.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:55.450 10:52:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:55.450 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:56.017 00:09:56.017 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:56.017 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:56.017 10:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:56.017 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:56.017 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:56.017 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.017 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.017 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.017 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:56.017 { 00:09:56.017 "cntlid": 25, 00:09:56.017 "qid": 0, 00:09:56.017 "state": "enabled", 00:09:56.017 "thread": "nvmf_tgt_poll_group_000", 00:09:56.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:09:56.017 "listen_address": { 00:09:56.017 "trtype": "TCP", 00:09:56.017 "adrfam": "IPv4", 00:09:56.017 "traddr": "10.0.0.3", 00:09:56.017 "trsvcid": "4420" 00:09:56.017 }, 00:09:56.017 "peer_address": { 00:09:56.017 "trtype": "TCP", 00:09:56.017 "adrfam": "IPv4", 00:09:56.017 "traddr": "10.0.0.1", 00:09:56.017 "trsvcid": "41478" 00:09:56.017 }, 00:09:56.017 "auth": { 00:09:56.017 "state": "completed", 00:09:56.017 "digest": "sha256", 00:09:56.017 "dhgroup": "ffdhe4096" 
00:09:56.017 } 00:09:56.017 } 00:09:56.017 ]' 00:09:56.017 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:56.275 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:56.275 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:56.275 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:56.275 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:56.275 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:56.275 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:56.275 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:56.534 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:09:56.534 10:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:09:57.101 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:57.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:57.101 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:57.101 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.101 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.101 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.101 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:57.101 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:57.101 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:57.360 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:09:57.360 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:57.360 10:52:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:57.360 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:57.360 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:57.360 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:57.360 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:57.360 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.360 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.360 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.360 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:57.360 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:57.360 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:57.619 00:09:57.619 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:57.619 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:57.619 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:57.878 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:57.878 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:57.878 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.878 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:57.878 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.878 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:57.878 { 00:09:57.878 "cntlid": 27, 00:09:57.878 "qid": 0, 00:09:57.878 "state": "enabled", 00:09:57.878 "thread": "nvmf_tgt_poll_group_000", 00:09:57.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:09:57.878 "listen_address": { 00:09:57.878 "trtype": "TCP", 00:09:57.878 "adrfam": "IPv4", 00:09:57.878 "traddr": "10.0.0.3", 00:09:57.878 "trsvcid": "4420" 00:09:57.878 }, 00:09:57.878 "peer_address": { 00:09:57.878 "trtype": "TCP", 00:09:57.878 "adrfam": 
"IPv4", 00:09:57.878 "traddr": "10.0.0.1", 00:09:57.878 "trsvcid": "41502" 00:09:57.878 }, 00:09:57.878 "auth": { 00:09:57.878 "state": "completed", 00:09:57.878 "digest": "sha256", 00:09:57.878 "dhgroup": "ffdhe4096" 00:09:57.878 } 00:09:57.878 } 00:09:57.878 ]' 00:09:57.878 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:57.878 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:57.878 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:57.878 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:57.878 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:57.878 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:57.878 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:57.878 10:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:58.137 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:09:58.137 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:09:58.730 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:58.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:58.731 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:09:58.731 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.731 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.731 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.731 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:58.731 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:58.731 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:58.990 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:09:58.990 10:52:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:58.990 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:58.990 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:58.990 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:58.990 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:58.990 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:58.990 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.990 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.990 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.990 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:58.990 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:58.990 10:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:59.249 00:09:59.249 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:59.249 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:59.249 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:59.509 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:59.509 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:59.509 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.509 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.509 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.509 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:59.509 { 00:09:59.509 "cntlid": 29, 00:09:59.509 "qid": 0, 00:09:59.509 "state": "enabled", 00:09:59.509 "thread": "nvmf_tgt_poll_group_000", 00:09:59.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:09:59.509 "listen_address": { 00:09:59.509 "trtype": "TCP", 00:09:59.509 "adrfam": "IPv4", 00:09:59.509 "traddr": "10.0.0.3", 
00:09:59.509 "trsvcid": "4420" 00:09:59.509 }, 00:09:59.509 "peer_address": { 00:09:59.509 "trtype": "TCP", 00:09:59.509 "adrfam": "IPv4", 00:09:59.509 "traddr": "10.0.0.1", 00:09:59.509 "trsvcid": "41518" 00:09:59.509 }, 00:09:59.509 "auth": { 00:09:59.509 "state": "completed", 00:09:59.509 "digest": "sha256", 00:09:59.509 "dhgroup": "ffdhe4096" 00:09:59.509 } 00:09:59.509 } 00:09:59.509 ]' 00:09:59.509 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:59.509 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:59.509 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:59.509 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:59.509 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:59.769 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:59.769 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:59.769 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:59.769 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:09:59.769 10:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:00.337 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:00.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:00.337 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:00.337 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.337 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.337 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.337 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:00.337 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:00.337 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:00.595 10:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:00.595 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:00.595 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:00.595 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:00.595 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:00.595 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:00.595 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:10:00.595 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.595 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.595 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.595 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:00.595 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:00.595 10:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:00.853 00:10:01.112 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:01.112 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:01.112 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:01.112 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:01.112 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:01.112 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.112 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.112 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.112 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:01.112 { 00:10:01.112 "cntlid": 31, 00:10:01.112 "qid": 0, 00:10:01.112 "state": "enabled", 00:10:01.112 "thread": "nvmf_tgt_poll_group_000", 00:10:01.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:01.112 "listen_address": { 00:10:01.112 "trtype": "TCP", 00:10:01.112 "adrfam": "IPv4", 
00:10:01.112 "traddr": "10.0.0.3", 00:10:01.112 "trsvcid": "4420" 00:10:01.112 }, 00:10:01.112 "peer_address": { 00:10:01.112 "trtype": "TCP", 00:10:01.112 "adrfam": "IPv4", 00:10:01.112 "traddr": "10.0.0.1", 00:10:01.112 "trsvcid": "41538" 00:10:01.112 }, 00:10:01.112 "auth": { 00:10:01.112 "state": "completed", 00:10:01.112 "digest": "sha256", 00:10:01.112 "dhgroup": "ffdhe4096" 00:10:01.112 } 00:10:01.112 } 00:10:01.112 ]' 00:10:01.370 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:01.370 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:01.370 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:01.370 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:01.370 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:01.370 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:01.370 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:01.370 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:01.628 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:01.628 10:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:02.193 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:02.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:02.193 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:02.193 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.193 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.193 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.193 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:02.193 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:02.193 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:02.193 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:10:02.452 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:02.452 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:02.452 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:02.452 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:02.452 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:02.452 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:02.452 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:02.452 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.452 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.452 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.452 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:02.452 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:02.452 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:03.019 00:10:03.019 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:03.019 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:03.019 10:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:03.019 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:03.019 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:03.019 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.019 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.020 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.020 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:03.020 { 00:10:03.020 "cntlid": 33, 00:10:03.020 "qid": 0, 00:10:03.020 "state": "enabled", 00:10:03.020 "thread": "nvmf_tgt_poll_group_000", 00:10:03.020 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:03.020 "listen_address": { 00:10:03.020 "trtype": "TCP", 00:10:03.020 "adrfam": "IPv4", 00:10:03.020 "traddr": "10.0.0.3", 00:10:03.020 "trsvcid": "4420" 00:10:03.020 }, 00:10:03.020 "peer_address": { 00:10:03.020 "trtype": "TCP", 00:10:03.020 "adrfam": "IPv4", 00:10:03.020 "traddr": "10.0.0.1", 00:10:03.020 "trsvcid": "41568" 00:10:03.020 }, 00:10:03.020 "auth": { 00:10:03.020 "state": "completed", 00:10:03.020 "digest": "sha256", 00:10:03.020 "dhgroup": "ffdhe6144" 00:10:03.020 } 00:10:03.020 } 00:10:03.020 ]' 00:10:03.020 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:03.280 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:03.280 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:03.280 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:03.280 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:03.280 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:03.280 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:03.280 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:03.538 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:10:03.538 10:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:10:04.104 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:04.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:04.104 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:04.104 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.104 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.104 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.104 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:04.104 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:10:04.104 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:04.362 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:04.362 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:04.362 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:04.362 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:04.362 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:04.362 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:04.362 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:04.362 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.362 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.362 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.362 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:04.362 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:04.362 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:04.621 00:10:04.621 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:04.621 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:04.621 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:04.878 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:04.878 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:04.878 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.878 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.878 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.878 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:10:04.878 { 00:10:04.878 "cntlid": 35, 00:10:04.878 "qid": 0, 00:10:04.878 "state": "enabled", 00:10:04.878 "thread": "nvmf_tgt_poll_group_000", 00:10:04.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:04.878 "listen_address": { 00:10:04.878 "trtype": "TCP", 00:10:04.878 "adrfam": "IPv4", 00:10:04.878 "traddr": "10.0.0.3", 00:10:04.878 "trsvcid": "4420" 00:10:04.878 }, 00:10:04.878 "peer_address": { 00:10:04.878 "trtype": "TCP", 00:10:04.878 "adrfam": "IPv4", 00:10:04.878 "traddr": "10.0.0.1", 00:10:04.878 "trsvcid": "39914" 00:10:04.878 }, 00:10:04.878 "auth": { 00:10:04.878 "state": "completed", 00:10:04.878 "digest": "sha256", 00:10:04.878 "dhgroup": "ffdhe6144" 00:10:04.878 } 00:10:04.878 } 00:10:04.878 ]' 00:10:04.878 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:04.878 10:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:04.878 10:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:04.878 10:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:04.878 10:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:05.136 10:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:05.136 10:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:05.136 10:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:05.136 10:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:10:05.137 10:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:10:05.706 10:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:05.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:05.706 10:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:05.706 10:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.706 10:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.706 10:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.706 10:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:05.706 10:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:05.706 10:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:05.965 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:05.965 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:05.965 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:05.965 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:05.965 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:05.965 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:05.965 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:05.965 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.965 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.965 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.965 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:05.965 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:05.965 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:06.531 00:10:06.531 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:06.531 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:06.531 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:06.789 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:06.789 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:06.789 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.789 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.789 10:52:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.789 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:06.789 { 00:10:06.789 "cntlid": 37, 00:10:06.789 "qid": 0, 00:10:06.789 "state": "enabled", 00:10:06.789 "thread": "nvmf_tgt_poll_group_000", 00:10:06.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:06.789 "listen_address": { 00:10:06.789 "trtype": "TCP", 00:10:06.789 "adrfam": "IPv4", 00:10:06.789 "traddr": "10.0.0.3", 00:10:06.789 "trsvcid": "4420" 00:10:06.789 }, 00:10:06.789 "peer_address": { 00:10:06.789 "trtype": "TCP", 00:10:06.789 "adrfam": "IPv4", 00:10:06.789 "traddr": "10.0.0.1", 00:10:06.789 "trsvcid": "39942" 00:10:06.789 }, 00:10:06.789 "auth": { 00:10:06.789 "state": "completed", 00:10:06.789 "digest": "sha256", 00:10:06.789 "dhgroup": "ffdhe6144" 00:10:06.789 } 00:10:06.789 } 00:10:06.789 ]' 00:10:06.789 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:06.790 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:06.790 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:06.790 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:06.790 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:06.790 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:06.790 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:06.790 10:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:07.048 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:07.048 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:07.614 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:07.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:07.614 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:07.614 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.614 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.614 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:07.614 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:07.614 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:07.614 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:07.872 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:07.872 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:07.872 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:07.872 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:07.872 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:07.872 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:07.872 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:10:07.872 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.872 10:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.872 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.872 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:07.872 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:07.872 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:08.438 00:10:08.438 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:08.439 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:08.439 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:08.697 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:08.697 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:08.697 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.697 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.697 
10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.697 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:08.697 { 00:10:08.697 "cntlid": 39, 00:10:08.697 "qid": 0, 00:10:08.697 "state": "enabled", 00:10:08.697 "thread": "nvmf_tgt_poll_group_000", 00:10:08.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:08.697 "listen_address": { 00:10:08.697 "trtype": "TCP", 00:10:08.697 "adrfam": "IPv4", 00:10:08.697 "traddr": "10.0.0.3", 00:10:08.697 "trsvcid": "4420" 00:10:08.697 }, 00:10:08.697 "peer_address": { 00:10:08.697 "trtype": "TCP", 00:10:08.697 "adrfam": "IPv4", 00:10:08.697 "traddr": "10.0.0.1", 00:10:08.697 "trsvcid": "39976" 00:10:08.697 }, 00:10:08.697 "auth": { 00:10:08.697 "state": "completed", 00:10:08.697 "digest": "sha256", 00:10:08.697 "dhgroup": "ffdhe6144" 00:10:08.697 } 00:10:08.697 } 00:10:08.697 ]' 00:10:08.697 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:08.697 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:08.697 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:08.697 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:08.697 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:08.697 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:08.697 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:08.697 10:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:08.955 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:08.955 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:09.523 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.523 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:09.523 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.523 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.523 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.523 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:10:09.523 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:09.523 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:09.523 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:09.782 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:09.782 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:09.782 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:09.782 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:09.782 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:09.782 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.782 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.782 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.782 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.782 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.782 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.782 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:09.782 10:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:10.350 00:10:10.350 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:10.350 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:10.350 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.606 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.606 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.606 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:10.606 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.863 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.863 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:10.863 { 00:10:10.863 "cntlid": 41, 00:10:10.863 "qid": 0, 00:10:10.863 "state": "enabled", 00:10:10.863 "thread": "nvmf_tgt_poll_group_000", 00:10:10.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:10.863 "listen_address": { 00:10:10.863 "trtype": "TCP", 00:10:10.863 "adrfam": "IPv4", 00:10:10.863 "traddr": "10.0.0.3", 00:10:10.863 "trsvcid": "4420" 00:10:10.863 }, 00:10:10.863 "peer_address": { 00:10:10.863 "trtype": "TCP", 00:10:10.863 "adrfam": "IPv4", 00:10:10.863 "traddr": "10.0.0.1", 00:10:10.863 "trsvcid": "40010" 00:10:10.863 }, 00:10:10.863 "auth": { 00:10:10.863 "state": "completed", 00:10:10.863 "digest": "sha256", 00:10:10.863 "dhgroup": "ffdhe8192" 00:10:10.863 } 00:10:10.863 } 00:10:10.863 ]' 00:10:10.863 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:10.863 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.863 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:10.863 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:10.863 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:10.863 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.863 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.863 10:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:11.151 10:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:10:11.151 10:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:10:11.788 10:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.788 10:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:11.788 10:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:11.788 10:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.788 10:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.788 10:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:11.788 10:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:11.788 10:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:12.054 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:12.054 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:12.054 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:12.054 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:12.054 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:12.054 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:12.054 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:12.054 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.054 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.054 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.054 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:12.054 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:12.054 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:12.627 00:10:12.627 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:12.627 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.627 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:12.886 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.886 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.886 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.886 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.886 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.886 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:12.886 { 00:10:12.886 "cntlid": 43, 00:10:12.886 "qid": 0, 00:10:12.886 "state": "enabled", 00:10:12.886 "thread": "nvmf_tgt_poll_group_000", 00:10:12.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:12.886 "listen_address": { 00:10:12.886 "trtype": "TCP", 00:10:12.886 "adrfam": "IPv4", 00:10:12.886 "traddr": "10.0.0.3", 00:10:12.886 "trsvcid": "4420" 00:10:12.886 }, 00:10:12.886 "peer_address": { 00:10:12.886 "trtype": "TCP", 00:10:12.886 "adrfam": "IPv4", 00:10:12.886 "traddr": "10.0.0.1", 00:10:12.886 "trsvcid": "40040" 00:10:12.886 }, 00:10:12.886 "auth": { 00:10:12.886 "state": "completed", 00:10:12.886 "digest": "sha256", 00:10:12.886 "dhgroup": "ffdhe8192" 00:10:12.886 } 00:10:12.886 } 00:10:12.886 ]' 00:10:12.886 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:12.886 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.886 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:12.886 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:12.886 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:12.886 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:12.886 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:12.886 10:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:13.145 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:10:13.145 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:10:13.713 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:13.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:13.713 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:13.713 
10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.713 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.713 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.713 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:13.713 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:13.713 10:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:13.971 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:13.971 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:13.971 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:13.971 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:13.971 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:13.971 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:13.971 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.971 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.971 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.971 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.971 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.972 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:13.972 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:14.538 00:10:14.538 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:14.538 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:14.538 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:14.797 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:14.797 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:14.797 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.797 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.797 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.797 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:14.797 { 00:10:14.797 "cntlid": 45, 00:10:14.797 "qid": 0, 00:10:14.797 "state": "enabled", 00:10:14.797 "thread": "nvmf_tgt_poll_group_000", 00:10:14.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:14.797 "listen_address": { 00:10:14.797 "trtype": "TCP", 00:10:14.797 "adrfam": "IPv4", 00:10:14.797 "traddr": "10.0.0.3", 00:10:14.797 "trsvcid": "4420" 00:10:14.797 }, 00:10:14.797 "peer_address": { 00:10:14.797 "trtype": "TCP", 00:10:14.797 "adrfam": "IPv4", 00:10:14.797 "traddr": "10.0.0.1", 00:10:14.797 "trsvcid": "35254" 00:10:14.797 }, 00:10:14.797 "auth": { 00:10:14.797 "state": "completed", 00:10:14.797 "digest": "sha256", 00:10:14.797 "dhgroup": "ffdhe8192" 00:10:14.797 } 00:10:14.797 } 00:10:14.797 ]' 00:10:14.797 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:14.797 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:14.797 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:14.797 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:14.797 10:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:15.055 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:15.055 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:15.055 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:15.313 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:15.313 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:15.877 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:15.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:15.877 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:15.877 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.877 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.877 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.877 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:15.877 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:15.877 10:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:16.136 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:16.136 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:16.136 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:16.136 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:16.136 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:16.136 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:16.136 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:10:16.136 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.136 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.136 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.136 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:16.136 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:16.136 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:16.705 00:10:16.705 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:16.705 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:16.705 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:16.705 10:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:16.705 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:16.705 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.705 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.705 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.705 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:16.705 { 00:10:16.705 "cntlid": 47, 00:10:16.705 "qid": 0, 00:10:16.705 "state": "enabled", 00:10:16.705 "thread": "nvmf_tgt_poll_group_000", 00:10:16.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:16.705 "listen_address": { 00:10:16.705 "trtype": "TCP", 00:10:16.705 "adrfam": "IPv4", 00:10:16.705 "traddr": "10.0.0.3", 00:10:16.705 "trsvcid": "4420" 00:10:16.705 }, 00:10:16.705 "peer_address": { 00:10:16.705 "trtype": "TCP", 00:10:16.705 "adrfam": "IPv4", 00:10:16.705 "traddr": "10.0.0.1", 00:10:16.705 "trsvcid": "35280" 00:10:16.705 }, 00:10:16.705 "auth": { 00:10:16.705 "state": "completed", 00:10:16.705 "digest": "sha256", 00:10:16.705 "dhgroup": "ffdhe8192" 00:10:16.705 } 00:10:16.705 } 00:10:16.705 ]' 00:10:16.705 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:16.964 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:16.964 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:16.964 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:16.964 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:16.964 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:16.964 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:16.964 10:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:17.223 10:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:17.223 10:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:17.792 10:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:17.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:17.792 10:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:17.792 10:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.792 10:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.792 10:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.792 10:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:17.792 10:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:17.792 10:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:17.792 10:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:17.792 10:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:18.051 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:10:18.051 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:18.051 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:18.051 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:18.051 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:18.051 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:18.051 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:18.052 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.052 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.052 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.052 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:18.052 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:18.052 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:18.311 00:10:18.311 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:18.311 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # jq -r '.[].name' 00:10:18.311 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:18.571 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.571 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.571 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.571 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.571 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.571 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:18.571 { 00:10:18.571 "cntlid": 49, 00:10:18.571 "qid": 0, 00:10:18.571 "state": "enabled", 00:10:18.571 "thread": "nvmf_tgt_poll_group_000", 00:10:18.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:18.571 "listen_address": { 00:10:18.571 "trtype": "TCP", 00:10:18.571 "adrfam": "IPv4", 00:10:18.571 "traddr": "10.0.0.3", 00:10:18.571 "trsvcid": "4420" 00:10:18.571 }, 00:10:18.571 "peer_address": { 00:10:18.571 "trtype": "TCP", 00:10:18.571 "adrfam": "IPv4", 00:10:18.571 "traddr": "10.0.0.1", 00:10:18.571 "trsvcid": "35314" 00:10:18.571 }, 00:10:18.571 "auth": { 00:10:18.571 "state": "completed", 00:10:18.571 "digest": "sha384", 00:10:18.571 "dhgroup": "null" 00:10:18.571 } 00:10:18.571 } 00:10:18.571 ]' 00:10:18.571 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:18.571 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:18.571 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:18.571 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:18.571 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:18.831 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.831 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.831 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.831 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:10:18.831 10:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret 
DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:19.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:19.766 10:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:10:20.025 00:10:20.025 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:20.025 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:20.025 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.283 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.283 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.283 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.283 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.283 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.283 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:20.283 { 00:10:20.283 "cntlid": 51, 00:10:20.283 "qid": 0, 00:10:20.283 "state": "enabled", 00:10:20.283 "thread": "nvmf_tgt_poll_group_000", 00:10:20.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:20.283 "listen_address": { 00:10:20.283 "trtype": "TCP", 00:10:20.283 "adrfam": "IPv4", 00:10:20.283 "traddr": "10.0.0.3", 00:10:20.283 "trsvcid": "4420" 00:10:20.283 }, 00:10:20.283 "peer_address": { 00:10:20.283 "trtype": "TCP", 00:10:20.283 "adrfam": "IPv4", 00:10:20.283 "traddr": "10.0.0.1", 00:10:20.283 "trsvcid": "35340" 00:10:20.283 }, 00:10:20.283 "auth": { 00:10:20.283 "state": "completed", 00:10:20.283 "digest": "sha384", 00:10:20.283 "dhgroup": "null" 00:10:20.283 } 00:10:20.283 } 00:10:20.283 ]' 00:10:20.283 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:20.541 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:20.541 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:20.541 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:20.541 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:20.541 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.541 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.541 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.800 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:10:20.800 10:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 
--dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:10:21.373 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.373 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:21.373 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.373 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.373 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.373 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:21.373 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:21.373 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:21.637 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:10:21.637 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:21.637 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:21.637 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:21.637 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:21.637 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.638 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.638 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.638 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.638 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.638 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.638 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.638 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:21.896 00:10:21.896 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:21.896 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:21.896 10:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:22.155 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.155 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.155 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.155 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.155 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.155 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:22.155 { 00:10:22.155 "cntlid": 53, 00:10:22.155 "qid": 0, 00:10:22.155 "state": "enabled", 00:10:22.155 "thread": "nvmf_tgt_poll_group_000", 00:10:22.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:22.155 "listen_address": { 00:10:22.155 "trtype": "TCP", 00:10:22.155 "adrfam": "IPv4", 00:10:22.155 "traddr": "10.0.0.3", 00:10:22.155 "trsvcid": "4420" 00:10:22.155 }, 00:10:22.155 "peer_address": { 00:10:22.155 "trtype": "TCP", 00:10:22.155 "adrfam": "IPv4", 00:10:22.155 "traddr": "10.0.0.1", 00:10:22.155 "trsvcid": "35366" 00:10:22.155 }, 00:10:22.155 "auth": { 00:10:22.155 "state": "completed", 00:10:22.155 "digest": "sha384", 00:10:22.155 "dhgroup": "null" 00:10:22.155 } 00:10:22.155 } 00:10:22.155 ]' 00:10:22.155 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:22.155 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:22.155 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:22.155 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:22.155 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:22.155 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.155 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.155 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:22.414 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:22.414 10:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:22.982 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:23.242 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:23.501 00:10:23.501 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:23.758 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:23.758 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.017 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.017 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.017 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.017 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.017 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.017 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:24.017 { 00:10:24.017 "cntlid": 55, 00:10:24.017 "qid": 0, 00:10:24.017 "state": "enabled", 00:10:24.017 "thread": "nvmf_tgt_poll_group_000", 00:10:24.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:24.017 "listen_address": { 00:10:24.017 "trtype": "TCP", 00:10:24.017 "adrfam": "IPv4", 00:10:24.017 "traddr": "10.0.0.3", 00:10:24.017 "trsvcid": "4420" 00:10:24.017 }, 00:10:24.017 "peer_address": { 00:10:24.017 "trtype": "TCP", 00:10:24.018 "adrfam": "IPv4", 00:10:24.018 "traddr": "10.0.0.1", 00:10:24.018 "trsvcid": "35404" 00:10:24.018 }, 00:10:24.018 "auth": { 00:10:24.018 "state": "completed", 00:10:24.018 "digest": "sha384", 00:10:24.018 "dhgroup": "null" 00:10:24.018 } 00:10:24.018 } 00:10:24.018 ]' 00:10:24.018 10:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:24.018 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:24.018 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:24.018 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:24.018 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:24.018 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.018 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.018 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.277 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:24.277 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 
0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:24.846 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.846 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:24.846 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.846 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.846 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.846 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:24.846 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:24.846 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:24.846 10:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:25.106 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:10:25.106 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:25.106 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:25.106 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:25.106 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:25.106 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.106 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:25.106 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.106 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.106 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.106 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:25.106 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:25.106 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:25.363 00:10:25.363 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:25.363 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:25.363 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:25.622 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.622 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.622 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.622 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.622 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.622 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:25.622 { 00:10:25.622 "cntlid": 57, 00:10:25.622 "qid": 0, 00:10:25.622 "state": "enabled", 00:10:25.622 "thread": "nvmf_tgt_poll_group_000", 00:10:25.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:25.622 "listen_address": { 00:10:25.622 "trtype": "TCP", 00:10:25.622 "adrfam": "IPv4", 00:10:25.622 "traddr": "10.0.0.3", 00:10:25.622 "trsvcid": "4420" 00:10:25.622 }, 00:10:25.622 "peer_address": { 00:10:25.622 "trtype": "TCP", 00:10:25.622 "adrfam": "IPv4", 00:10:25.622 "traddr": "10.0.0.1", 00:10:25.622 "trsvcid": "37644" 00:10:25.622 }, 00:10:25.622 "auth": { 00:10:25.622 "state": "completed", 00:10:25.622 "digest": "sha384", 00:10:25.622 "dhgroup": "ffdhe2048" 00:10:25.622 } 00:10:25.622 } 00:10:25.622 ]' 00:10:25.622 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:25.622 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:25.622 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.622 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:25.622 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.622 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.622 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.622 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.882 10:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:10:25.882 10:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:10:26.451 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.451 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:26.451 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.451 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.451 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.451 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:26.451 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:26.451 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:26.711 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:10:26.711 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:26.711 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:26.711 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:26.711 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:26.711 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.711 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.711 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.711 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.711 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.711 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.711 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.711 10:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.971 00:10:26.971 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:26.971 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:26.971 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.230 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.230 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.230 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.230 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.230 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.230 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:27.230 { 00:10:27.230 "cntlid": 59, 00:10:27.230 "qid": 0, 00:10:27.230 "state": "enabled", 00:10:27.230 "thread": "nvmf_tgt_poll_group_000", 00:10:27.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:27.230 "listen_address": { 00:10:27.230 "trtype": "TCP", 00:10:27.230 "adrfam": "IPv4", 00:10:27.230 "traddr": "10.0.0.3", 00:10:27.230 "trsvcid": "4420" 00:10:27.230 }, 00:10:27.230 "peer_address": { 00:10:27.230 "trtype": "TCP", 00:10:27.230 "adrfam": "IPv4", 00:10:27.230 "traddr": "10.0.0.1", 00:10:27.230 "trsvcid": "37678" 00:10:27.230 }, 00:10:27.230 "auth": { 00:10:27.230 "state": "completed", 00:10:27.230 "digest": "sha384", 00:10:27.230 "dhgroup": "ffdhe2048" 00:10:27.230 } 00:10:27.230 } 00:10:27.230 ]' 00:10:27.230 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:27.230 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:27.230 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:27.489 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:27.489 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:27.489 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.489 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.489 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.749 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:10:27.749 10:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.319 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.578 00:10:28.837 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:28.837 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:28.838 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.838 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.838 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.838 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.838 10:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.838 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.096 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.096 { 00:10:29.096 "cntlid": 61, 00:10:29.096 "qid": 0, 00:10:29.096 "state": "enabled", 00:10:29.096 "thread": "nvmf_tgt_poll_group_000", 00:10:29.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:29.096 "listen_address": { 00:10:29.096 "trtype": "TCP", 00:10:29.096 "adrfam": "IPv4", 00:10:29.096 "traddr": "10.0.0.3", 00:10:29.096 "trsvcid": "4420" 00:10:29.096 }, 00:10:29.096 "peer_address": { 00:10:29.096 "trtype": "TCP", 00:10:29.096 "adrfam": "IPv4", 00:10:29.096 "traddr": "10.0.0.1", 00:10:29.096 "trsvcid": "37718" 00:10:29.096 }, 00:10:29.096 "auth": { 00:10:29.096 "state": "completed", 00:10:29.096 "digest": "sha384", 00:10:29.096 "dhgroup": "ffdhe2048" 00:10:29.096 } 00:10:29.096 } 00:10:29.096 ]' 00:10:29.096 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:29.096 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:29.096 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:29.096 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:29.096 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:29.096 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.096 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.096 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.355 
10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:29.355 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:29.922 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:29.922 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:29.922 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.922 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.922 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.922 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:29.922 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:29.922 10:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:30.181 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:10:30.181 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:30.181 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:30.181 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:30.181 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:30.181 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.181 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:10:30.181 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.181 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.181 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.181 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:30.181 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:30.181 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:30.440 00:10:30.440 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:30.440 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:30.440 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.700 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.700 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.700 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.700 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.700 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.700 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:30.700 { 00:10:30.700 "cntlid": 63, 00:10:30.700 "qid": 0, 00:10:30.700 "state": "enabled", 00:10:30.700 "thread": "nvmf_tgt_poll_group_000", 00:10:30.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:30.700 "listen_address": { 00:10:30.700 "trtype": "TCP", 00:10:30.700 "adrfam": "IPv4", 00:10:30.700 "traddr": "10.0.0.3", 00:10:30.700 "trsvcid": "4420" 00:10:30.700 }, 00:10:30.700 "peer_address": { 00:10:30.700 "trtype": "TCP", 00:10:30.700 "adrfam": "IPv4", 00:10:30.700 "traddr": "10.0.0.1", 00:10:30.700 "trsvcid": "37752" 00:10:30.700 }, 00:10:30.700 "auth": { 00:10:30.700 "state": "completed", 00:10:30.700 "digest": "sha384", 00:10:30.700 "dhgroup": "ffdhe2048" 00:10:30.700 } 00:10:30.700 } 00:10:30.700 ]' 00:10:30.700 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:30.700 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:30.700 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:30.959 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:30.959 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:30.959 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.959 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.959 10:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
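[editor note] The @73-@78 trace lines above are the per-iteration verification for ffdhe2048/key3. As a hedged bash sketch (not the literal auth.sh source): hostrpc is rebuilt here from the @31 lines as a thin wrapper around the host-side rpc.py socket, and rpc_cmd is assumed to be the harness's target-side RPC helper defined elsewhere in autotest_common.sh.

# Reconstructed sketch of the verification step, under the assumptions stated above.
hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

# the attached controller must show up on the host side under the expected name
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

# the first qpair reported by the target must carry an auth block matching the
# current digest/dhgroup and a finished DH-HMAC-CHAP handshake
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe2048" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

# tear the host-side controller down before the kernel-initiator pass
hostrpc bdev_nvme_detach_controller nvme0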
00:10:31.217 10:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:31.217 10:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:31.784 10:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.784 10:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:31.784 10:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.784 10:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.784 10:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.784 10:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:31.784 10:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:31.784 10:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:31.784 10:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:32.187 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:10:32.187 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:32.187 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:32.187 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:32.187 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:32.187 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.187 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.187 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.187 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.187 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.187 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
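[editor note] The @119-@123 lines in the entry above mark the start of the next dhgroup pass (ffdhe3072). As a rough guide to the control flow only: the dhgroups and keys arrays are defined earlier in auth.sh and are not visible in this excerpt, so the values in the comments below are inferred from what actually appears in the trace.

# Approximate control flow of the loop being traced; not the literal script.
for dhgroup in "${dhgroups[@]}"; do        # trace shows ffdhe2048, ffdhe3072, ffdhe4096
    for keyid in "${!keys[@]}"; do         # trace shows key0..key3
        # pin the host-side NVMe driver to a single digest/dhgroup combination
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        # per the trace, connect_authenticate then adds the host with that key,
        # attaches/verifies/detaches an SPDK controller, repeats the handshake
        # with the kernel initiator, and finally removes the host again
        connect_authenticate sha384 "$dhgroup" "$keyid"
    done
done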
00:10:32.187 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.187 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.187 00:10:32.445 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:32.445 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:32.445 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.445 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.445 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.445 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.445 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.703 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.703 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:32.703 { 00:10:32.703 "cntlid": 65, 00:10:32.703 "qid": 0, 00:10:32.703 "state": "enabled", 00:10:32.703 "thread": "nvmf_tgt_poll_group_000", 00:10:32.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:32.703 "listen_address": { 00:10:32.703 "trtype": "TCP", 00:10:32.703 "adrfam": "IPv4", 00:10:32.703 "traddr": "10.0.0.3", 00:10:32.703 "trsvcid": "4420" 00:10:32.703 }, 00:10:32.703 "peer_address": { 00:10:32.703 "trtype": "TCP", 00:10:32.703 "adrfam": "IPv4", 00:10:32.703 "traddr": "10.0.0.1", 00:10:32.703 "trsvcid": "37770" 00:10:32.703 }, 00:10:32.703 "auth": { 00:10:32.703 "state": "completed", 00:10:32.703 "digest": "sha384", 00:10:32.703 "dhgroup": "ffdhe3072" 00:10:32.703 } 00:10:32.703 } 00:10:32.703 ]' 00:10:32.703 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:32.703 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:32.703 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:32.703 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:32.704 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:32.704 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.704 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.704 10:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.962 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:10:32.962 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:10:33.529 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.529 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:33.529 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.529 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.529 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.529 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:33.529 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:33.529 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:33.788 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:10:33.788 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:33.788 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:33.788 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:33.788 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:33.788 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.788 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.788 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.788 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.788 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.788 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.788 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:33.788 10:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.047 00:10:34.047 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:34.047 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.047 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:34.305 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.305 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.305 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.305 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.305 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.305 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:34.305 { 00:10:34.305 "cntlid": 67, 00:10:34.305 "qid": 0, 00:10:34.305 "state": "enabled", 00:10:34.305 "thread": "nvmf_tgt_poll_group_000", 00:10:34.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:34.305 "listen_address": { 00:10:34.305 "trtype": "TCP", 00:10:34.305 "adrfam": "IPv4", 00:10:34.305 "traddr": "10.0.0.3", 00:10:34.305 "trsvcid": "4420" 00:10:34.305 }, 00:10:34.305 "peer_address": { 00:10:34.305 "trtype": "TCP", 00:10:34.305 "adrfam": "IPv4", 00:10:34.305 "traddr": "10.0.0.1", 00:10:34.305 "trsvcid": "41548" 00:10:34.305 }, 00:10:34.305 "auth": { 00:10:34.305 "state": "completed", 00:10:34.305 "digest": "sha384", 00:10:34.305 "dhgroup": "ffdhe3072" 00:10:34.305 } 00:10:34.305 } 00:10:34.305 ]' 00:10:34.305 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.564 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:34.564 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:34.564 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:34.564 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:34.564 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
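[editor note] After these jq checks pass, the trace below detaches the SPDK host-side controller and re-runs the same handshake through the kernel initiator. That second leg, pieced together from the @80/@36/@82/@83 lines, is roughly the following; the two DHHC-1 strings are placeholders for the generated key1/ckey1 secrets printed in full in the log, and -i 1 / -l 0 are passed exactly as logged.

# Kernel-initiator leg of each iteration (reconstructed sketch, secrets elided).
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7

nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 \
    --dhchap-secret 'DHHC-1:01:<key1 secret from the log>' \
    --dhchap-ctrl-secret 'DHHC-1:02:<ckey1 secret from the log>'

nvme disconnect -n "$subnqn"                              # log reports: disconnected 1 controller(s)
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"   # undo the add_host for this key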
00:10:34.564 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.564 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.823 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:10:34.823 10:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:10:35.391 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.391 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:35.391 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.391 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.391 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.391 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.391 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:35.391 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:35.650 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:10:35.650 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:35.650 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:35.650 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:35.650 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:35.650 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.650 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.650 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.650 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:35.650 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.650 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.650 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.650 10:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.909 00:10:35.909 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:35.909 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:35.909 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.167 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.167 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.167 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.167 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.167 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.167 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:36.167 { 00:10:36.167 "cntlid": 69, 00:10:36.167 "qid": 0, 00:10:36.167 "state": "enabled", 00:10:36.167 "thread": "nvmf_tgt_poll_group_000", 00:10:36.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:36.167 "listen_address": { 00:10:36.167 "trtype": "TCP", 00:10:36.167 "adrfam": "IPv4", 00:10:36.167 "traddr": "10.0.0.3", 00:10:36.167 "trsvcid": "4420" 00:10:36.167 }, 00:10:36.167 "peer_address": { 00:10:36.167 "trtype": "TCP", 00:10:36.167 "adrfam": "IPv4", 00:10:36.167 "traddr": "10.0.0.1", 00:10:36.167 "trsvcid": "41576" 00:10:36.167 }, 00:10:36.167 "auth": { 00:10:36.167 "state": "completed", 00:10:36.167 "digest": "sha384", 00:10:36.167 "dhgroup": "ffdhe3072" 00:10:36.167 } 00:10:36.167 } 00:10:36.167 ]' 00:10:36.167 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:36.167 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:36.167 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:36.426 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:36.426 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:36.426 10:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.426 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.426 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.684 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:36.684 10:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:37.250 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.250 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:37.250 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.250 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.250 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.250 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:37.250 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:37.250 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:37.510 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:10:37.510 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:37.510 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:37.510 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:37.510 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:37.510 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.510 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:10:37.510 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:37.510 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.510 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.510 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:37.510 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:37.510 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:37.769 00:10:37.769 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:37.769 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:37.769 10:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.029 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.029 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.029 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.029 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.029 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.029 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:38.029 { 00:10:38.029 "cntlid": 71, 00:10:38.029 "qid": 0, 00:10:38.029 "state": "enabled", 00:10:38.029 "thread": "nvmf_tgt_poll_group_000", 00:10:38.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:38.029 "listen_address": { 00:10:38.029 "trtype": "TCP", 00:10:38.029 "adrfam": "IPv4", 00:10:38.029 "traddr": "10.0.0.3", 00:10:38.029 "trsvcid": "4420" 00:10:38.029 }, 00:10:38.029 "peer_address": { 00:10:38.029 "trtype": "TCP", 00:10:38.029 "adrfam": "IPv4", 00:10:38.029 "traddr": "10.0.0.1", 00:10:38.029 "trsvcid": "41604" 00:10:38.029 }, 00:10:38.029 "auth": { 00:10:38.029 "state": "completed", 00:10:38.029 "digest": "sha384", 00:10:38.029 "dhgroup": "ffdhe3072" 00:10:38.029 } 00:10:38.029 } 00:10:38.029 ]' 00:10:38.029 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:38.029 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:38.029 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:38.029 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:38.029 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:38.029 
10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.029 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.029 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.289 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:38.289 10:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:38.857 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.857 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:38.857 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.857 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.115 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.682 00:10:39.682 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:39.682 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.682 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:39.682 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.682 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.683 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.683 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.940 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.940 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:39.940 { 00:10:39.940 "cntlid": 73, 00:10:39.940 "qid": 0, 00:10:39.940 "state": "enabled", 00:10:39.940 "thread": "nvmf_tgt_poll_group_000", 00:10:39.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:39.940 "listen_address": { 00:10:39.940 "trtype": "TCP", 00:10:39.940 "adrfam": "IPv4", 00:10:39.940 "traddr": "10.0.0.3", 00:10:39.940 "trsvcid": "4420" 00:10:39.940 }, 00:10:39.940 "peer_address": { 00:10:39.940 "trtype": "TCP", 00:10:39.940 "adrfam": "IPv4", 00:10:39.940 "traddr": "10.0.0.1", 00:10:39.940 "trsvcid": "41632" 00:10:39.940 }, 00:10:39.940 "auth": { 00:10:39.940 "state": "completed", 00:10:39.940 "digest": "sha384", 00:10:39.940 "dhgroup": "ffdhe4096" 00:10:39.940 } 00:10:39.940 } 00:10:39.940 ]' 00:10:39.940 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:39.940 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:39.940 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:39.940 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:39.940 10:53:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:39.940 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.940 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.940 10:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.199 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:10:40.199 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:10:40.765 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.765 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:40.765 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.765 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.765 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.765 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:40.765 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:40.765 10:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:41.023 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:10:41.023 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:41.023 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:41.023 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:41.023 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:41.023 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.023 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.023 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.023 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.023 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.023 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.023 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.023 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:41.281 00:10:41.281 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:41.281 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:41.281 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.539 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.539 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.539 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.539 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.539 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.539 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:41.539 { 00:10:41.539 "cntlid": 75, 00:10:41.539 "qid": 0, 00:10:41.539 "state": "enabled", 00:10:41.539 "thread": "nvmf_tgt_poll_group_000", 00:10:41.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:41.539 "listen_address": { 00:10:41.539 "trtype": "TCP", 00:10:41.539 "adrfam": "IPv4", 00:10:41.539 "traddr": "10.0.0.3", 00:10:41.539 "trsvcid": "4420" 00:10:41.539 }, 00:10:41.539 "peer_address": { 00:10:41.539 "trtype": "TCP", 00:10:41.539 "adrfam": "IPv4", 00:10:41.539 "traddr": "10.0.0.1", 00:10:41.539 "trsvcid": "41652" 00:10:41.539 }, 00:10:41.539 "auth": { 00:10:41.539 "state": "completed", 00:10:41.539 "digest": "sha384", 00:10:41.539 "dhgroup": "ffdhe4096" 00:10:41.539 } 00:10:41.539 } 00:10:41.539 ]' 00:10:41.539 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:41.539 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:41.539 10:53:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:41.539 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:41.539 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:41.798 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.798 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.798 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.798 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:10:41.798 10:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:10:42.367 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.367 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:42.367 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.367 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.367 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.367 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.367 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:42.367 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:42.626 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:10:42.626 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:42.626 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:42.626 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:42.626 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:42.626 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.626 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.626 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.626 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.626 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.626 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.626 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.627 10:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.886 00:10:43.145 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:43.145 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:43.145 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.146 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.146 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.146 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.146 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.146 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.146 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:43.146 { 00:10:43.146 "cntlid": 77, 00:10:43.146 "qid": 0, 00:10:43.146 "state": "enabled", 00:10:43.146 "thread": "nvmf_tgt_poll_group_000", 00:10:43.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:43.146 "listen_address": { 00:10:43.146 "trtype": "TCP", 00:10:43.146 "adrfam": "IPv4", 00:10:43.146 "traddr": "10.0.0.3", 00:10:43.146 "trsvcid": "4420" 00:10:43.146 }, 00:10:43.146 "peer_address": { 00:10:43.146 "trtype": "TCP", 00:10:43.146 "adrfam": "IPv4", 00:10:43.146 "traddr": "10.0.0.1", 00:10:43.146 "trsvcid": "41674" 00:10:43.146 }, 00:10:43.146 "auth": { 00:10:43.146 "state": "completed", 00:10:43.146 "digest": "sha384", 00:10:43.146 "dhgroup": "ffdhe4096" 00:10:43.146 } 00:10:43.146 } 00:10:43.146 ]' 00:10:43.146 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:10:43.146 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:43.146 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:43.405 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:43.405 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:43.405 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.405 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.405 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.664 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:43.664 10:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:44.233 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.233 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:44.233 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.233 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.233 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.233 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:44.233 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:44.233 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:44.492 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:10:44.492 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:44.492 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:44.492 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:44.492 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:10:44.492 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.492 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:10:44.492 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.492 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.492 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.492 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:44.492 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:44.492 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:44.752 00:10:44.752 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:44.752 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:44.752 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.011 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.011 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.011 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.011 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.011 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.011 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.011 { 00:10:45.011 "cntlid": 79, 00:10:45.011 "qid": 0, 00:10:45.011 "state": "enabled", 00:10:45.011 "thread": "nvmf_tgt_poll_group_000", 00:10:45.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:45.011 "listen_address": { 00:10:45.011 "trtype": "TCP", 00:10:45.011 "adrfam": "IPv4", 00:10:45.011 "traddr": "10.0.0.3", 00:10:45.011 "trsvcid": "4420" 00:10:45.011 }, 00:10:45.011 "peer_address": { 00:10:45.012 "trtype": "TCP", 00:10:45.012 "adrfam": "IPv4", 00:10:45.012 "traddr": "10.0.0.1", 00:10:45.012 "trsvcid": "41850" 00:10:45.012 }, 00:10:45.012 "auth": { 00:10:45.012 "state": "completed", 00:10:45.012 "digest": "sha384", 00:10:45.012 "dhgroup": "ffdhe4096" 00:10:45.012 } 00:10:45.012 } 00:10:45.012 ]' 00:10:45.012 10:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:45.012 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:45.012 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:45.012 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:45.012 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:45.012 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.012 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.012 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.271 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:45.271 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:45.839 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.839 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:45.839 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.839 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.839 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.839 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:45.839 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:45.839 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:45.839 10:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:46.098 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:10:46.098 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:46.098 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:46.098 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:46.098 10:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:46.098 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.098 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.098 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.098 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.098 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.098 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.098 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.098 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.356 00:10:46.356 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.356 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.356 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.615 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.615 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.615 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.615 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.615 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.615 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:46.615 { 00:10:46.615 "cntlid": 81, 00:10:46.615 "qid": 0, 00:10:46.615 "state": "enabled", 00:10:46.615 "thread": "nvmf_tgt_poll_group_000", 00:10:46.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:46.615 "listen_address": { 00:10:46.615 "trtype": "TCP", 00:10:46.615 "adrfam": "IPv4", 00:10:46.615 "traddr": "10.0.0.3", 00:10:46.615 "trsvcid": "4420" 00:10:46.615 }, 00:10:46.615 "peer_address": { 00:10:46.615 "trtype": "TCP", 00:10:46.615 "adrfam": "IPv4", 00:10:46.615 "traddr": "10.0.0.1", 00:10:46.615 "trsvcid": "41878" 00:10:46.615 }, 00:10:46.615 "auth": { 00:10:46.615 "state": "completed", 00:10:46.615 "digest": "sha384", 00:10:46.615 "dhgroup": "ffdhe6144" 
00:10:46.615 } 00:10:46.615 } 00:10:46.615 ]' 00:10:46.615 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:46.615 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:46.615 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:46.615 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:46.615 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:46.873 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.873 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.873 10:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.131 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:10:47.132 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:47.701 10:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.701 10:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.271 00:10:48.271 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:48.271 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:48.271 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.531 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.531 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.531 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.531 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.531 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.531 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:48.531 { 00:10:48.531 "cntlid": 83, 00:10:48.531 "qid": 0, 00:10:48.531 "state": "enabled", 00:10:48.531 "thread": "nvmf_tgt_poll_group_000", 00:10:48.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:48.531 "listen_address": { 00:10:48.531 "trtype": "TCP", 00:10:48.531 "adrfam": "IPv4", 00:10:48.531 "traddr": "10.0.0.3", 00:10:48.531 "trsvcid": "4420" 00:10:48.531 }, 00:10:48.531 "peer_address": { 00:10:48.531 "trtype": "TCP", 00:10:48.531 "adrfam": 
"IPv4", 00:10:48.531 "traddr": "10.0.0.1", 00:10:48.531 "trsvcid": "41908" 00:10:48.531 }, 00:10:48.531 "auth": { 00:10:48.531 "state": "completed", 00:10:48.531 "digest": "sha384", 00:10:48.531 "dhgroup": "ffdhe6144" 00:10:48.531 } 00:10:48.531 } 00:10:48.531 ]' 00:10:48.531 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:48.531 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:48.531 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:48.531 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:48.531 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:48.531 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.531 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.531 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.792 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:10:48.792 10:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:10:49.362 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.362 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:49.362 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.362 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.362 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.362 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:49.362 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:49.362 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:49.623 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:10:49.623 10:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:49.623 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:49.623 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:49.623 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:49.623 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.623 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.623 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.623 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.623 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.623 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.623 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.623 10:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.883 00:10:50.142 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.142 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:50.142 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.142 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.142 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.142 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.142 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.142 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.142 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:50.142 { 00:10:50.142 "cntlid": 85, 00:10:50.142 "qid": 0, 00:10:50.142 "state": "enabled", 00:10:50.142 "thread": "nvmf_tgt_poll_group_000", 00:10:50.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:50.142 "listen_address": { 00:10:50.142 "trtype": "TCP", 00:10:50.142 "adrfam": "IPv4", 00:10:50.142 "traddr": "10.0.0.3", 
00:10:50.142 "trsvcid": "4420" 00:10:50.142 }, 00:10:50.142 "peer_address": { 00:10:50.142 "trtype": "TCP", 00:10:50.142 "adrfam": "IPv4", 00:10:50.142 "traddr": "10.0.0.1", 00:10:50.142 "trsvcid": "41930" 00:10:50.142 }, 00:10:50.142 "auth": { 00:10:50.142 "state": "completed", 00:10:50.142 "digest": "sha384", 00:10:50.142 "dhgroup": "ffdhe6144" 00:10:50.142 } 00:10:50.143 } 00:10:50.143 ]' 00:10:50.143 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:50.402 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:50.402 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:50.402 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:50.402 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:50.402 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.402 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.402 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.662 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:50.662 10:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:51.244 10:53:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:51.244 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:51.836 00:10:51.836 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:51.836 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:51.836 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.836 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.836 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.836 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.836 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.836 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.836 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.836 { 00:10:51.836 "cntlid": 87, 00:10:51.836 "qid": 0, 00:10:51.836 "state": "enabled", 00:10:51.836 "thread": "nvmf_tgt_poll_group_000", 00:10:51.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:51.836 "listen_address": { 00:10:51.836 "trtype": "TCP", 00:10:51.836 "adrfam": "IPv4", 
00:10:51.836 "traddr": "10.0.0.3", 00:10:51.836 "trsvcid": "4420" 00:10:51.836 }, 00:10:51.836 "peer_address": { 00:10:51.836 "trtype": "TCP", 00:10:51.836 "adrfam": "IPv4", 00:10:51.836 "traddr": "10.0.0.1", 00:10:51.836 "trsvcid": "41946" 00:10:51.836 }, 00:10:51.836 "auth": { 00:10:51.836 "state": "completed", 00:10:51.836 "digest": "sha384", 00:10:51.836 "dhgroup": "ffdhe6144" 00:10:51.836 } 00:10:51.836 } 00:10:51.836 ]' 00:10:51.836 10:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:52.098 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:52.098 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:52.098 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:52.098 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:52.098 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.098 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.098 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.358 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:52.358 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:52.926 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.926 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:52.926 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.926 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.926 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.926 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:52.926 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.926 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:52.926 10:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:10:52.926 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:10:52.926 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.926 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:52.926 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:52.926 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:52.926 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.926 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.926 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.926 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.926 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.926 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.926 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.926 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.495 00:10:53.495 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.495 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.495 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.755 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.755 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.755 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.755 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.755 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.755 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.755 { 00:10:53.755 "cntlid": 89, 00:10:53.755 "qid": 0, 00:10:53.755 "state": "enabled", 00:10:53.755 "thread": "nvmf_tgt_poll_group_000", 00:10:53.755 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:53.755 "listen_address": { 00:10:53.755 "trtype": "TCP", 00:10:53.755 "adrfam": "IPv4", 00:10:53.755 "traddr": "10.0.0.3", 00:10:53.755 "trsvcid": "4420" 00:10:53.755 }, 00:10:53.755 "peer_address": { 00:10:53.755 "trtype": "TCP", 00:10:53.755 "adrfam": "IPv4", 00:10:53.755 "traddr": "10.0.0.1", 00:10:53.755 "trsvcid": "41972" 00:10:53.755 }, 00:10:53.755 "auth": { 00:10:53.755 "state": "completed", 00:10:53.755 "digest": "sha384", 00:10:53.755 "dhgroup": "ffdhe8192" 00:10:53.755 } 00:10:53.755 } 00:10:53.755 ]' 00:10:53.755 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.755 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:53.755 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.755 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:53.755 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:54.014 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.014 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.014 10:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.014 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:10:54.014 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:10:54.582 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.582 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:54.582 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.582 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.582 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.582 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.582 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:10:54.582 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:54.842 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:10:54.842 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.842 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:54.842 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:54.842 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:54.842 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.842 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.842 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.842 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.842 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.842 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.842 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.842 10:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.411 00:10:55.411 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.411 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:55.411 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.671 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.671 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.671 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.671 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.671 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.671 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@74 -- # qpairs='[ 00:10:55.671 { 00:10:55.671 "cntlid": 91, 00:10:55.671 "qid": 0, 00:10:55.671 "state": "enabled", 00:10:55.671 "thread": "nvmf_tgt_poll_group_000", 00:10:55.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:55.671 "listen_address": { 00:10:55.671 "trtype": "TCP", 00:10:55.671 "adrfam": "IPv4", 00:10:55.671 "traddr": "10.0.0.3", 00:10:55.671 "trsvcid": "4420" 00:10:55.671 }, 00:10:55.671 "peer_address": { 00:10:55.671 "trtype": "TCP", 00:10:55.671 "adrfam": "IPv4", 00:10:55.671 "traddr": "10.0.0.1", 00:10:55.671 "trsvcid": "38294" 00:10:55.671 }, 00:10:55.671 "auth": { 00:10:55.671 "state": "completed", 00:10:55.671 "digest": "sha384", 00:10:55.671 "dhgroup": "ffdhe8192" 00:10:55.671 } 00:10:55.671 } 00:10:55.671 ]' 00:10:55.671 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:55.671 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:55.671 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:55.671 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:55.671 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.930 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.930 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.930 10:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.930 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:10:55.931 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:10:56.499 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.499 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:56.499 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.499 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.499 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.499 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:56.499 10:53:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:56.499 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:56.758 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:10:56.758 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:56.758 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:56.758 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:56.758 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:56.758 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.758 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.758 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.758 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.758 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.758 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.758 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.758 10:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.325 00:10:57.325 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:57.325 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.325 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:57.585 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.585 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.585 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.585 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.585 10:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.585 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:57.585 { 00:10:57.585 "cntlid": 93, 00:10:57.585 "qid": 0, 00:10:57.585 "state": "enabled", 00:10:57.585 "thread": "nvmf_tgt_poll_group_000", 00:10:57.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:57.585 "listen_address": { 00:10:57.585 "trtype": "TCP", 00:10:57.585 "adrfam": "IPv4", 00:10:57.585 "traddr": "10.0.0.3", 00:10:57.585 "trsvcid": "4420" 00:10:57.585 }, 00:10:57.585 "peer_address": { 00:10:57.585 "trtype": "TCP", 00:10:57.585 "adrfam": "IPv4", 00:10:57.585 "traddr": "10.0.0.1", 00:10:57.585 "trsvcid": "38324" 00:10:57.585 }, 00:10:57.585 "auth": { 00:10:57.585 "state": "completed", 00:10:57.585 "digest": "sha384", 00:10:57.585 "dhgroup": "ffdhe8192" 00:10:57.585 } 00:10:57.585 } 00:10:57.585 ]' 00:10:57.585 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:57.585 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:57.585 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:57.585 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:57.585 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:57.585 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.585 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.585 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.844 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:57.844 10:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:10:58.412 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.412 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:10:58.412 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.412 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.412 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
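The entries above make up one full connect_authenticate round for sha384/ffdhe8192: the host's bdev_nvme layer is restricted to that digest and DH group, the host NQN is added to the subsystem with the key pair for this round, a controller is attached (which triggers the DH-HMAC-CHAP exchange on the qid 0 queue pair), the qpair's auth fields are checked with jq, and the controller is detached again. Below is a condensed, illustrative bash sketch of that sequence, rebuilt only from the RPC calls visible in the log; the socket path, addresses, NQNs and key names are taken from the log, while the variable names and the assumption that the target answers on rpc.py's default socket are mine, and key1/ckey1 are assumed to have been registered earlier in the test.

#!/usr/bin/env bash
# Condensed sketch of one connect_authenticate round (sha384 / ffdhe8192 / key1),
# reconstructed from the RPC calls in the log above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7

# Limit the host's bdev_nvme layer to the digest and DH group under test.
$RPC -s "$HOSTSOCK" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# Authorize the host on the subsystem (target side) with this round's key pair.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach a controller from the host with the same keys; this drives the
# DH-HMAC-CHAP handshake on the new queue pair (the qid 0 entry checked below).
$RPC -s "$HOSTSOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Confirm on the target that authentication completed with the expected parameters.
qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach before the next key/digest/DH-group combination.
$RPC -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0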
00:10:58.412 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:58.412 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:58.412 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:10:58.671 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:10:58.671 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:58.671 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:58.671 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:58.671 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:58.671 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.671 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:10:58.671 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.671 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.671 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.671 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:58.671 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:58.671 10:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:59.239 00:10:59.240 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:59.240 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.240 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.497 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.497 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.497 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.497 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.497 
10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.497 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:59.497 { 00:10:59.497 "cntlid": 95, 00:10:59.497 "qid": 0, 00:10:59.497 "state": "enabled", 00:10:59.497 "thread": "nvmf_tgt_poll_group_000", 00:10:59.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:10:59.497 "listen_address": { 00:10:59.497 "trtype": "TCP", 00:10:59.497 "adrfam": "IPv4", 00:10:59.497 "traddr": "10.0.0.3", 00:10:59.497 "trsvcid": "4420" 00:10:59.497 }, 00:10:59.497 "peer_address": { 00:10:59.497 "trtype": "TCP", 00:10:59.497 "adrfam": "IPv4", 00:10:59.497 "traddr": "10.0.0.1", 00:10:59.497 "trsvcid": "38354" 00:10:59.497 }, 00:10:59.497 "auth": { 00:10:59.497 "state": "completed", 00:10:59.497 "digest": "sha384", 00:10:59.497 "dhgroup": "ffdhe8192" 00:10:59.497 } 00:10:59.497 } 00:10:59.497 ]' 00:10:59.497 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:59.497 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:59.497 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:59.497 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:59.497 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:59.497 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.497 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.497 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.756 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:10:59.756 10:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:00.353 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.353 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:00.353 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.353 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.353 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.353 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in 
"${digests[@]}" 00:11:00.353 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:00.353 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:00.353 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:00.353 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:00.612 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:00.612 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:00.612 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:00.612 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:00.612 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:00.612 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.612 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.612 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.612 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.612 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.612 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.612 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.612 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.871 00:11:00.871 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:00.871 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.871 10:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:01.130 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.130 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.130 10:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.130 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.130 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.130 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:01.130 { 00:11:01.130 "cntlid": 97, 00:11:01.130 "qid": 0, 00:11:01.130 "state": "enabled", 00:11:01.130 "thread": "nvmf_tgt_poll_group_000", 00:11:01.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:01.130 "listen_address": { 00:11:01.130 "trtype": "TCP", 00:11:01.130 "adrfam": "IPv4", 00:11:01.130 "traddr": "10.0.0.3", 00:11:01.130 "trsvcid": "4420" 00:11:01.130 }, 00:11:01.130 "peer_address": { 00:11:01.130 "trtype": "TCP", 00:11:01.130 "adrfam": "IPv4", 00:11:01.130 "traddr": "10.0.0.1", 00:11:01.130 "trsvcid": "38366" 00:11:01.130 }, 00:11:01.130 "auth": { 00:11:01.130 "state": "completed", 00:11:01.130 "digest": "sha512", 00:11:01.130 "dhgroup": "null" 00:11:01.130 } 00:11:01.130 } 00:11:01.130 ]' 00:11:01.130 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:01.130 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:01.130 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:01.130 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:01.130 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:01.130 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.130 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.130 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.387 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:11:01.387 10:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:11:01.953 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.954 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:01.954 10:53:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.954 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.954 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.954 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.954 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:01.954 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:02.211 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:02.211 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:02.211 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:02.211 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:02.211 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:02.211 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.211 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.211 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.211 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.211 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.211 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.211 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.211 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.470 00:11:02.470 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.470 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:02.470 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.730 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:11:02.730 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.730 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.730 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.730 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.730 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:02.730 { 00:11:02.730 "cntlid": 99, 00:11:02.730 "qid": 0, 00:11:02.730 "state": "enabled", 00:11:02.730 "thread": "nvmf_tgt_poll_group_000", 00:11:02.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:02.730 "listen_address": { 00:11:02.730 "trtype": "TCP", 00:11:02.730 "adrfam": "IPv4", 00:11:02.730 "traddr": "10.0.0.3", 00:11:02.730 "trsvcid": "4420" 00:11:02.730 }, 00:11:02.730 "peer_address": { 00:11:02.730 "trtype": "TCP", 00:11:02.730 "adrfam": "IPv4", 00:11:02.730 "traddr": "10.0.0.1", 00:11:02.730 "trsvcid": "38394" 00:11:02.730 }, 00:11:02.730 "auth": { 00:11:02.730 "state": "completed", 00:11:02.730 "digest": "sha512", 00:11:02.730 "dhgroup": "null" 00:11:02.730 } 00:11:02.730 } 00:11:02.730 ]' 00:11:02.730 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.730 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:02.730 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.730 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:02.730 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:02.730 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.730 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.730 10:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.989 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:11:02.989 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:11:03.558 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.558 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:03.558 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.558 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.558 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.558 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:03.558 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:03.558 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:03.817 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:03.817 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:03.817 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:03.817 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:03.817 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:03.817 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:03.817 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.818 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.818 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.818 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.818 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.818 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:03.818 10:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.077 00:11:04.077 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:04.077 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:04.077 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.336 10:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.336 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.336 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.336 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.336 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.336 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.336 { 00:11:04.336 "cntlid": 101, 00:11:04.336 "qid": 0, 00:11:04.336 "state": "enabled", 00:11:04.336 "thread": "nvmf_tgt_poll_group_000", 00:11:04.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:04.336 "listen_address": { 00:11:04.336 "trtype": "TCP", 00:11:04.336 "adrfam": "IPv4", 00:11:04.336 "traddr": "10.0.0.3", 00:11:04.336 "trsvcid": "4420" 00:11:04.336 }, 00:11:04.336 "peer_address": { 00:11:04.336 "trtype": "TCP", 00:11:04.336 "adrfam": "IPv4", 00:11:04.336 "traddr": "10.0.0.1", 00:11:04.336 "trsvcid": "37964" 00:11:04.336 }, 00:11:04.336 "auth": { 00:11:04.336 "state": "completed", 00:11:04.336 "digest": "sha512", 00:11:04.336 "dhgroup": "null" 00:11:04.336 } 00:11:04.336 } 00:11:04.336 ]' 00:11:04.336 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:04.336 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:04.336 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:04.336 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:04.336 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:04.596 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.596 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.596 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.596 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:11:04.596 10:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:11:05.164 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:05.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:05.164 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- 
# rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:05.164 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.164 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.164 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.164 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:05.164 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:05.164 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:05.424 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:05.424 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:05.424 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:05.424 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:05.424 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:05.424 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:05.424 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:11:05.424 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.424 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.424 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.424 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:05.424 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:05.424 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:05.684 00:11:05.684 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:05.684 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:05.684 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.943 10:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.943 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.943 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.943 10:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.943 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.943 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:05.943 { 00:11:05.943 "cntlid": 103, 00:11:05.943 "qid": 0, 00:11:05.943 "state": "enabled", 00:11:05.943 "thread": "nvmf_tgt_poll_group_000", 00:11:05.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:05.943 "listen_address": { 00:11:05.943 "trtype": "TCP", 00:11:05.943 "adrfam": "IPv4", 00:11:05.943 "traddr": "10.0.0.3", 00:11:05.943 "trsvcid": "4420" 00:11:05.943 }, 00:11:05.943 "peer_address": { 00:11:05.943 "trtype": "TCP", 00:11:05.943 "adrfam": "IPv4", 00:11:05.943 "traddr": "10.0.0.1", 00:11:05.943 "trsvcid": "37990" 00:11:05.943 }, 00:11:05.943 "auth": { 00:11:05.943 "state": "completed", 00:11:05.943 "digest": "sha512", 00:11:05.943 "dhgroup": "null" 00:11:05.943 } 00:11:05.943 } 00:11:05.943 ]' 00:11:05.943 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:05.943 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:05.943 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:05.943 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:05.943 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:06.202 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:06.202 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:06.202 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.202 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:06.202 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:06.770 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.770 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:06.770 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.770 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.770 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.770 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:06.770 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:06.770 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:06.770 10:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:07.030 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:11:07.030 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:07.030 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:07.030 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:07.030 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:07.030 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.030 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.030 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.030 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.030 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.030 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.030 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.030 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.289 00:11:07.289 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:07.289 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:07.289 10:54:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.549 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.549 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.549 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.549 { 00:11:07.549 "cntlid": 105, 00:11:07.549 "qid": 0, 00:11:07.549 "state": "enabled", 00:11:07.549 "thread": "nvmf_tgt_poll_group_000", 00:11:07.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:07.549 "listen_address": { 00:11:07.549 "trtype": "TCP", 00:11:07.549 "adrfam": "IPv4", 00:11:07.549 "traddr": "10.0.0.3", 00:11:07.549 "trsvcid": "4420" 00:11:07.549 }, 00:11:07.549 "peer_address": { 00:11:07.549 "trtype": "TCP", 00:11:07.549 "adrfam": "IPv4", 00:11:07.549 "traddr": "10.0.0.1", 00:11:07.549 "trsvcid": "38030" 00:11:07.549 }, 00:11:07.549 "auth": { 00:11:07.549 "state": "completed", 00:11:07.549 "digest": "sha512", 00:11:07.549 "dhgroup": "ffdhe2048" 00:11:07.549 } 00:11:07.549 } 00:11:07.549 ]' 00:11:07.549 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.549 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:07.549 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.808 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:07.808 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:07.808 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.808 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.808 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.067 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:11:08.068 10:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 
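The nvme connect just above is the kernel-initiator leg that each round repeats (here the loop has already moved on to sha512 with ffdhe2048): nvme-cli connects with the DH-HMAC-CHAP secrets passed literally on the command line, the controller is disconnected, and the host is removed from the subsystem before the next combination is configured. A condensed sketch of that leg follows, with the DHHC-1 secret strings replaced by placeholders (the full values appear verbatim in the log); apart from those placeholders and the variable names, the commands and options mirror the log's invocations.

#!/usr/bin/env bash
# Condensed sketch of the kernel nvme-cli leg of one round; the secret values
# below are placeholders standing in for the full DHHC-1 strings from the log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$HOSTID"
KEY='DHHC-1:00:<host secret for the key index under test>'
CTRL_KEY='DHHC-1:03:<paired controller secret>'

# Connect from the kernel initiator with the secrets passed literally
# (option set copied from the log's connect invocation).
nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CTRL_KEY"

# Tear the kernel controller down and de-authorize the host before the next
# digest/DH-group combination is set up.
nvme disconnect -n "$SUBNQN"
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"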
00:11:08.326 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.585 10:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.844 00:11:08.844 10:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.844 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.844 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:09.103 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.103 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.103 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.103 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.103 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.103 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:09.103 { 00:11:09.103 "cntlid": 107, 00:11:09.103 "qid": 0, 00:11:09.103 "state": "enabled", 00:11:09.103 "thread": "nvmf_tgt_poll_group_000", 00:11:09.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:09.103 "listen_address": { 00:11:09.103 "trtype": "TCP", 00:11:09.103 "adrfam": "IPv4", 00:11:09.103 "traddr": "10.0.0.3", 00:11:09.103 "trsvcid": "4420" 00:11:09.103 }, 00:11:09.103 "peer_address": { 00:11:09.103 "trtype": "TCP", 00:11:09.103 "adrfam": "IPv4", 00:11:09.103 "traddr": "10.0.0.1", 00:11:09.103 "trsvcid": "38068" 00:11:09.103 }, 00:11:09.103 "auth": { 00:11:09.103 "state": "completed", 00:11:09.103 "digest": "sha512", 00:11:09.103 "dhgroup": "ffdhe2048" 00:11:09.103 } 00:11:09.103 } 00:11:09.103 ]' 00:11:09.103 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:09.362 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:09.362 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:09.362 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:09.362 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:09.362 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.362 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.362 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.621 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:11:09.621 10:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret 
DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:11:10.189 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.190 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.449 00:11:10.708 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.708 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.708 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.708 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.708 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.708 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.708 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.708 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.708 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.708 { 00:11:10.708 "cntlid": 109, 00:11:10.708 "qid": 0, 00:11:10.708 "state": "enabled", 00:11:10.708 "thread": "nvmf_tgt_poll_group_000", 00:11:10.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:10.708 "listen_address": { 00:11:10.708 "trtype": "TCP", 00:11:10.708 "adrfam": "IPv4", 00:11:10.708 "traddr": "10.0.0.3", 00:11:10.708 "trsvcid": "4420" 00:11:10.708 }, 00:11:10.708 "peer_address": { 00:11:10.708 "trtype": "TCP", 00:11:10.708 "adrfam": "IPv4", 00:11:10.708 "traddr": "10.0.0.1", 00:11:10.708 "trsvcid": "38100" 00:11:10.708 }, 00:11:10.708 "auth": { 00:11:10.708 "state": "completed", 00:11:10.708 "digest": "sha512", 00:11:10.708 "dhgroup": "ffdhe2048" 00:11:10.708 } 00:11:10.708 } 00:11:10.708 ]' 00:11:10.708 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.969 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:10.969 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.969 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:10.969 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.969 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.969 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.969 10:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.228 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:11:11.228 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:11.818 10:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:12.077 00:11:12.077 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.077 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.077 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.336 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.336 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.336 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.336 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.336 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.336 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.336 { 00:11:12.336 "cntlid": 111, 00:11:12.336 "qid": 0, 00:11:12.336 "state": "enabled", 00:11:12.336 "thread": "nvmf_tgt_poll_group_000", 00:11:12.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:12.336 "listen_address": { 00:11:12.336 "trtype": "TCP", 00:11:12.336 "adrfam": "IPv4", 00:11:12.336 "traddr": "10.0.0.3", 00:11:12.336 "trsvcid": "4420" 00:11:12.336 }, 00:11:12.336 "peer_address": { 00:11:12.336 "trtype": "TCP", 00:11:12.336 "adrfam": "IPv4", 00:11:12.336 "traddr": "10.0.0.1", 00:11:12.336 "trsvcid": "38124" 00:11:12.336 }, 00:11:12.336 "auth": { 00:11:12.336 "state": "completed", 00:11:12.336 "digest": "sha512", 00:11:12.336 "dhgroup": "ffdhe2048" 00:11:12.336 } 00:11:12.336 } 00:11:12.336 ]' 00:11:12.336 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.336 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:12.336 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.595 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:12.595 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.595 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.595 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.595 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.854 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:12.854 10:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.423 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.683 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.683 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.683 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.683 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.942 00:11:13.942 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:13.942 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.942 10:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:13.942 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.202 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.202 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.202 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.202 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.202 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.202 { 00:11:14.202 "cntlid": 113, 00:11:14.202 "qid": 0, 00:11:14.202 "state": "enabled", 00:11:14.202 "thread": "nvmf_tgt_poll_group_000", 00:11:14.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:14.202 "listen_address": { 00:11:14.202 "trtype": "TCP", 00:11:14.202 "adrfam": "IPv4", 00:11:14.202 "traddr": "10.0.0.3", 00:11:14.202 "trsvcid": "4420" 00:11:14.202 }, 00:11:14.202 "peer_address": { 00:11:14.202 "trtype": "TCP", 00:11:14.202 "adrfam": "IPv4", 00:11:14.202 "traddr": "10.0.0.1", 00:11:14.202 "trsvcid": "38160" 00:11:14.202 }, 00:11:14.202 "auth": { 00:11:14.202 "state": "completed", 00:11:14.202 "digest": "sha512", 00:11:14.202 "dhgroup": "ffdhe3072" 00:11:14.202 } 00:11:14.202 } 00:11:14.202 ]' 00:11:14.202 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.202 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:14.202 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.202 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:14.202 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.202 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.202 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.202 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.461 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret 
DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:11:14.461 10:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:11:15.029 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.029 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:15.029 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.029 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.029 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.029 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.029 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:15.029 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:15.288 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:11:15.288 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.288 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:15.288 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:15.288 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:15.288 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.288 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.288 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.288 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.288 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.288 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.288 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.288 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.548 00:11:15.548 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.548 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.548 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.806 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.806 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.806 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.806 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.806 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.806 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.806 { 00:11:15.806 "cntlid": 115, 00:11:15.806 "qid": 0, 00:11:15.806 "state": "enabled", 00:11:15.806 "thread": "nvmf_tgt_poll_group_000", 00:11:15.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:15.806 "listen_address": { 00:11:15.806 "trtype": "TCP", 00:11:15.806 "adrfam": "IPv4", 00:11:15.806 "traddr": "10.0.0.3", 00:11:15.806 "trsvcid": "4420" 00:11:15.806 }, 00:11:15.806 "peer_address": { 00:11:15.806 "trtype": "TCP", 00:11:15.806 "adrfam": "IPv4", 00:11:15.806 "traddr": "10.0.0.1", 00:11:15.806 "trsvcid": "55384" 00:11:15.806 }, 00:11:15.806 "auth": { 00:11:15.806 "state": "completed", 00:11:15.806 "digest": "sha512", 00:11:15.806 "dhgroup": "ffdhe3072" 00:11:15.806 } 00:11:15.806 } 00:11:15.806 ]' 00:11:15.806 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.806 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:15.806 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.806 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:15.806 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.065 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.065 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.065 10:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.065 10:54:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:11:16.066 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:11:16.634 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.634 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:16.634 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.634 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.634 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.634 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.634 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:16.634 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:16.894 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:11:16.894 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.894 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:16.894 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:16.894 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:16.894 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.894 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.894 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.894 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.894 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.894 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.894 10:54:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.894 10:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.153 00:11:17.153 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.153 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.153 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.412 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.412 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.412 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.412 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.412 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.412 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.412 { 00:11:17.412 "cntlid": 117, 00:11:17.412 "qid": 0, 00:11:17.412 "state": "enabled", 00:11:17.412 "thread": "nvmf_tgt_poll_group_000", 00:11:17.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:17.412 "listen_address": { 00:11:17.412 "trtype": "TCP", 00:11:17.412 "adrfam": "IPv4", 00:11:17.412 "traddr": "10.0.0.3", 00:11:17.412 "trsvcid": "4420" 00:11:17.412 }, 00:11:17.412 "peer_address": { 00:11:17.412 "trtype": "TCP", 00:11:17.412 "adrfam": "IPv4", 00:11:17.412 "traddr": "10.0.0.1", 00:11:17.412 "trsvcid": "55414" 00:11:17.412 }, 00:11:17.412 "auth": { 00:11:17.412 "state": "completed", 00:11:17.412 "digest": "sha512", 00:11:17.412 "dhgroup": "ffdhe3072" 00:11:17.412 } 00:11:17.412 } 00:11:17.412 ]' 00:11:17.412 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.412 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:17.412 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.412 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:17.412 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.671 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.671 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.671 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.671 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:11:17.671 10:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:11:18.239 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.239 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:18.239 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.239 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.239 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.239 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.239 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:18.239 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:18.498 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:11:18.498 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.498 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:18.498 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:18.498 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:18.498 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.498 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:11:18.498 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.498 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.498 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.498 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:11:18.498 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:18.498 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:18.756 00:11:18.756 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.756 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.756 10:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.015 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.015 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.015 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.015 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.015 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.015 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.015 { 00:11:19.015 "cntlid": 119, 00:11:19.015 "qid": 0, 00:11:19.015 "state": "enabled", 00:11:19.015 "thread": "nvmf_tgt_poll_group_000", 00:11:19.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:19.015 "listen_address": { 00:11:19.015 "trtype": "TCP", 00:11:19.015 "adrfam": "IPv4", 00:11:19.015 "traddr": "10.0.0.3", 00:11:19.015 "trsvcid": "4420" 00:11:19.015 }, 00:11:19.015 "peer_address": { 00:11:19.015 "trtype": "TCP", 00:11:19.015 "adrfam": "IPv4", 00:11:19.015 "traddr": "10.0.0.1", 00:11:19.015 "trsvcid": "55452" 00:11:19.015 }, 00:11:19.015 "auth": { 00:11:19.015 "state": "completed", 00:11:19.015 "digest": "sha512", 00:11:19.015 "dhgroup": "ffdhe3072" 00:11:19.015 } 00:11:19.015 } 00:11:19.015 ]' 00:11:19.015 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.015 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:19.015 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.275 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:19.275 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.275 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.275 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.275 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.275 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:19.275 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:19.843 10:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.843 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:19.843 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.843 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.843 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.843 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:19.843 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:19.843 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:19.843 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:20.103 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:11:20.103 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.103 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:20.103 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:20.103 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:20.103 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.103 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.103 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.103 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.103 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.103 10:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.103 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.103 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.363 00:11:20.622 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.622 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:20.622 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.622 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.622 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.622 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.622 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.622 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.622 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:20.622 { 00:11:20.622 "cntlid": 121, 00:11:20.622 "qid": 0, 00:11:20.622 "state": "enabled", 00:11:20.622 "thread": "nvmf_tgt_poll_group_000", 00:11:20.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:20.622 "listen_address": { 00:11:20.622 "trtype": "TCP", 00:11:20.622 "adrfam": "IPv4", 00:11:20.622 "traddr": "10.0.0.3", 00:11:20.622 "trsvcid": "4420" 00:11:20.622 }, 00:11:20.622 "peer_address": { 00:11:20.622 "trtype": "TCP", 00:11:20.622 "adrfam": "IPv4", 00:11:20.622 "traddr": "10.0.0.1", 00:11:20.622 "trsvcid": "55476" 00:11:20.622 }, 00:11:20.622 "auth": { 00:11:20.622 "state": "completed", 00:11:20.622 "digest": "sha512", 00:11:20.622 "dhgroup": "ffdhe4096" 00:11:20.622 } 00:11:20.622 } 00:11:20.622 ]' 00:11:20.622 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:20.882 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:20.882 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:20.882 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:20.882 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.882 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.882 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.882 10:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.140 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:11:21.140 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:11:21.706 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.706 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:21.706 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.706 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.706 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.706 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.706 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:21.706 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:21.965 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:11:21.966 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.966 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:21.966 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:21.966 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:21.966 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.966 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.966 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.966 10:54:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.966 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.966 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.966 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.966 10:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.225 00:11:22.225 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.225 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.225 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.484 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.484 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.484 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.484 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.484 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.484 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.484 { 00:11:22.484 "cntlid": 123, 00:11:22.484 "qid": 0, 00:11:22.484 "state": "enabled", 00:11:22.484 "thread": "nvmf_tgt_poll_group_000", 00:11:22.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:22.484 "listen_address": { 00:11:22.484 "trtype": "TCP", 00:11:22.484 "adrfam": "IPv4", 00:11:22.484 "traddr": "10.0.0.3", 00:11:22.484 "trsvcid": "4420" 00:11:22.484 }, 00:11:22.484 "peer_address": { 00:11:22.484 "trtype": "TCP", 00:11:22.484 "adrfam": "IPv4", 00:11:22.484 "traddr": "10.0.0.1", 00:11:22.484 "trsvcid": "55502" 00:11:22.484 }, 00:11:22.484 "auth": { 00:11:22.484 "state": "completed", 00:11:22.484 "digest": "sha512", 00:11:22.484 "dhgroup": "ffdhe4096" 00:11:22.484 } 00:11:22.484 } 00:11:22.484 ]' 00:11:22.484 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.484 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:22.484 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.484 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:22.484 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:11:22.484 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.484 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.484 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.742 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:11:22.742 10:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:11:23.309 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.309 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:23.309 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.309 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.309 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.309 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.309 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:23.309 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:23.568 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:11:23.568 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:23.568 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:23.568 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:23.568 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:23.568 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.568 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.568 10:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.568 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.568 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.568 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.568 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.568 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.829 00:11:23.829 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.829 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.829 10:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.088 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.088 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.088 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.088 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.088 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.088 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.088 { 00:11:24.088 "cntlid": 125, 00:11:24.088 "qid": 0, 00:11:24.088 "state": "enabled", 00:11:24.089 "thread": "nvmf_tgt_poll_group_000", 00:11:24.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:24.089 "listen_address": { 00:11:24.089 "trtype": "TCP", 00:11:24.089 "adrfam": "IPv4", 00:11:24.089 "traddr": "10.0.0.3", 00:11:24.089 "trsvcid": "4420" 00:11:24.089 }, 00:11:24.089 "peer_address": { 00:11:24.089 "trtype": "TCP", 00:11:24.089 "adrfam": "IPv4", 00:11:24.089 "traddr": "10.0.0.1", 00:11:24.089 "trsvcid": "55532" 00:11:24.089 }, 00:11:24.089 "auth": { 00:11:24.089 "state": "completed", 00:11:24.089 "digest": "sha512", 00:11:24.089 "dhgroup": "ffdhe4096" 00:11:24.089 } 00:11:24.089 } 00:11:24.089 ]' 00:11:24.089 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.089 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:24.089 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.089 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:24.089 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.349 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.349 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.349 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.349 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:11:24.349 10:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:11:24.918 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.918 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:24.918 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.918 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.918 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.918 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.918 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:24.918 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:25.177 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:11:25.177 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.177 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:25.177 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:25.177 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:25.177 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.177 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:11:25.177 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.177 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.177 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.177 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:25.177 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:25.177 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:25.436 00:11:25.695 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.695 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.696 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.696 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.696 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.696 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.696 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.696 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.696 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.696 { 00:11:25.696 "cntlid": 127, 00:11:25.696 "qid": 0, 00:11:25.696 "state": "enabled", 00:11:25.696 "thread": "nvmf_tgt_poll_group_000", 00:11:25.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:25.696 "listen_address": { 00:11:25.696 "trtype": "TCP", 00:11:25.696 "adrfam": "IPv4", 00:11:25.696 "traddr": "10.0.0.3", 00:11:25.696 "trsvcid": "4420" 00:11:25.696 }, 00:11:25.696 "peer_address": { 00:11:25.696 "trtype": "TCP", 00:11:25.696 "adrfam": "IPv4", 00:11:25.696 "traddr": "10.0.0.1", 00:11:25.696 "trsvcid": "56888" 00:11:25.696 }, 00:11:25.696 "auth": { 00:11:25.696 "state": "completed", 00:11:25.696 "digest": "sha512", 00:11:25.696 "dhgroup": "ffdhe4096" 00:11:25.696 } 00:11:25.696 } 00:11:25.696 ]' 00:11:25.696 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.696 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:25.696 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.955 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:25.955 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.955 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.955 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.955 10:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.216 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:26.216 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.829 10:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.396 00:11:27.396 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.396 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.396 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.653 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.653 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.653 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.653 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.653 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.653 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.653 { 00:11:27.653 "cntlid": 129, 00:11:27.653 "qid": 0, 00:11:27.653 "state": "enabled", 00:11:27.653 "thread": "nvmf_tgt_poll_group_000", 00:11:27.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:27.653 "listen_address": { 00:11:27.653 "trtype": "TCP", 00:11:27.653 "adrfam": "IPv4", 00:11:27.653 "traddr": "10.0.0.3", 00:11:27.653 "trsvcid": "4420" 00:11:27.653 }, 00:11:27.653 "peer_address": { 00:11:27.653 "trtype": "TCP", 00:11:27.653 "adrfam": "IPv4", 00:11:27.653 "traddr": "10.0.0.1", 00:11:27.653 "trsvcid": "56912" 00:11:27.653 }, 00:11:27.653 "auth": { 00:11:27.653 "state": "completed", 00:11:27.653 "digest": "sha512", 00:11:27.653 "dhgroup": "ffdhe6144" 00:11:27.653 } 00:11:27.653 } 00:11:27.653 ]' 00:11:27.653 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.653 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:27.653 10:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.653 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:27.653 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.653 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.653 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.653 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.912 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:11:27.912 10:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:11:28.478 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.478 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:28.478 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.478 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.478 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.478 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.478 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:28.479 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:28.737 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:11:28.737 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.737 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:28.737 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:28.737 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:28.737 10:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.737 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.737 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.737 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.737 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.737 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.737 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.737 10:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.996 00:11:28.996 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.996 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.996 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.254 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.254 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.254 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.254 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.254 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.254 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.254 { 00:11:29.254 "cntlid": 131, 00:11:29.254 "qid": 0, 00:11:29.254 "state": "enabled", 00:11:29.254 "thread": "nvmf_tgt_poll_group_000", 00:11:29.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:29.254 "listen_address": { 00:11:29.254 "trtype": "TCP", 00:11:29.254 "adrfam": "IPv4", 00:11:29.254 "traddr": "10.0.0.3", 00:11:29.254 "trsvcid": "4420" 00:11:29.254 }, 00:11:29.254 "peer_address": { 00:11:29.254 "trtype": "TCP", 00:11:29.254 "adrfam": "IPv4", 00:11:29.254 "traddr": "10.0.0.1", 00:11:29.254 "trsvcid": "56946" 00:11:29.254 }, 00:11:29.254 "auth": { 00:11:29.254 "state": "completed", 00:11:29.254 "digest": "sha512", 00:11:29.254 "dhgroup": "ffdhe6144" 00:11:29.254 } 00:11:29.254 } 00:11:29.254 ]' 00:11:29.254 10:54:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.254 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:29.254 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.254 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:29.254 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.512 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.512 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.512 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.512 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:11:29.512 10:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:11:30.077 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.078 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:30.078 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.078 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.078 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.078 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.078 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:30.078 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:30.336 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:11:30.336 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.336 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:30.336 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:30.336 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:30.336 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.336 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.336 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.336 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.336 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.336 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.336 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.336 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.902 00:11:30.902 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.902 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.902 10:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.902 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.902 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.902 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.902 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.903 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.903 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:30.903 { 00:11:30.903 "cntlid": 133, 00:11:30.903 "qid": 0, 00:11:30.903 "state": "enabled", 00:11:30.903 "thread": "nvmf_tgt_poll_group_000", 00:11:30.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:30.903 "listen_address": { 00:11:30.903 "trtype": "TCP", 00:11:30.903 "adrfam": "IPv4", 00:11:30.903 "traddr": "10.0.0.3", 00:11:30.903 "trsvcid": "4420" 00:11:30.903 }, 00:11:30.903 "peer_address": { 00:11:30.903 "trtype": "TCP", 00:11:30.903 "adrfam": "IPv4", 00:11:30.903 "traddr": "10.0.0.1", 00:11:30.903 "trsvcid": "56972" 00:11:30.903 }, 00:11:30.903 "auth": { 00:11:30.903 "state": "completed", 00:11:30.903 "digest": 
"sha512", 00:11:30.903 "dhgroup": "ffdhe6144" 00:11:30.903 } 00:11:30.903 } 00:11:30.903 ]' 00:11:30.903 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.161 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:31.161 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.161 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:31.161 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.161 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.161 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.161 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.419 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:11:31.419 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:11:31.984 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.984 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:31.984 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.984 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.984 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.984 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:31.984 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:31.984 10:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:32.241 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:11:32.241 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.241 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:11:32.241 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:32.241 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:32.241 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.241 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:11:32.241 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.241 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.241 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.241 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:32.241 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:32.241 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:32.499 00:11:32.499 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.499 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.499 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.757 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.757 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.757 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.757 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.757 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.757 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.757 { 00:11:32.757 "cntlid": 135, 00:11:32.757 "qid": 0, 00:11:32.757 "state": "enabled", 00:11:32.757 "thread": "nvmf_tgt_poll_group_000", 00:11:32.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:32.757 "listen_address": { 00:11:32.757 "trtype": "TCP", 00:11:32.757 "adrfam": "IPv4", 00:11:32.757 "traddr": "10.0.0.3", 00:11:32.757 "trsvcid": "4420" 00:11:32.757 }, 00:11:32.757 "peer_address": { 00:11:32.757 "trtype": "TCP", 00:11:32.757 "adrfam": "IPv4", 00:11:32.757 "traddr": "10.0.0.1", 00:11:32.757 "trsvcid": "56990" 00:11:32.757 }, 00:11:32.757 "auth": { 00:11:32.757 "state": "completed", 00:11:32.757 
"digest": "sha512", 00:11:32.757 "dhgroup": "ffdhe6144" 00:11:32.757 } 00:11:32.757 } 00:11:32.757 ]' 00:11:32.757 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.757 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:32.757 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:32.757 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:32.757 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.757 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.758 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.758 10:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.016 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:33.016 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:33.581 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.581 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:33.581 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.581 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.581 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.581 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:33.581 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:33.581 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:33.581 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:33.840 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:11:33.840 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.840 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:11:33.840 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:33.840 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:33.840 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.840 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.840 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.840 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.840 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.840 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.840 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.840 10:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.407 00:11:34.407 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:34.407 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.407 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.666 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.666 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.666 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.666 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.666 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.666 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.666 { 00:11:34.666 "cntlid": 137, 00:11:34.666 "qid": 0, 00:11:34.666 "state": "enabled", 00:11:34.666 "thread": "nvmf_tgt_poll_group_000", 00:11:34.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:34.666 "listen_address": { 00:11:34.666 "trtype": "TCP", 00:11:34.666 "adrfam": "IPv4", 00:11:34.666 "traddr": "10.0.0.3", 00:11:34.666 "trsvcid": "4420" 00:11:34.666 }, 00:11:34.666 "peer_address": { 00:11:34.666 "trtype": "TCP", 00:11:34.666 "adrfam": "IPv4", 00:11:34.666 "traddr": "10.0.0.1", 
00:11:34.666 "trsvcid": "43138" 00:11:34.666 }, 00:11:34.666 "auth": { 00:11:34.666 "state": "completed", 00:11:34.666 "digest": "sha512", 00:11:34.666 "dhgroup": "ffdhe8192" 00:11:34.666 } 00:11:34.666 } 00:11:34.666 ]' 00:11:34.666 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:34.666 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:34.666 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.666 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:34.666 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.666 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.666 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.666 10:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.924 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:11:34.924 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:11:35.491 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.491 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:35.491 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.491 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.491 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.491 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.491 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:35.491 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:35.798 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 
1 00:11:35.798 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.798 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:35.798 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:35.798 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:35.798 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.798 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.798 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.798 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.798 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.798 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.798 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.798 10:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.366 00:11:36.366 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.366 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.367 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.625 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.625 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.625 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.625 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.625 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.625 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.625 { 00:11:36.625 "cntlid": 139, 00:11:36.625 "qid": 0, 00:11:36.625 "state": "enabled", 00:11:36.625 "thread": "nvmf_tgt_poll_group_000", 00:11:36.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:36.625 "listen_address": { 00:11:36.625 "trtype": "TCP", 00:11:36.625 "adrfam": "IPv4", 00:11:36.625 
"traddr": "10.0.0.3", 00:11:36.625 "trsvcid": "4420" 00:11:36.625 }, 00:11:36.625 "peer_address": { 00:11:36.625 "trtype": "TCP", 00:11:36.625 "adrfam": "IPv4", 00:11:36.625 "traddr": "10.0.0.1", 00:11:36.625 "trsvcid": "43180" 00:11:36.625 }, 00:11:36.625 "auth": { 00:11:36.625 "state": "completed", 00:11:36.625 "digest": "sha512", 00:11:36.625 "dhgroup": "ffdhe8192" 00:11:36.625 } 00:11:36.625 } 00:11:36.625 ]' 00:11:36.625 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.625 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:36.625 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.625 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:36.625 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.625 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.625 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.625 10:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.883 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:11:36.883 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: --dhchap-ctrl-secret DHHC-1:02:OWFhZDQ2NDljYjhiZGZmNzY2N2Q2NDY3YTVmNmIyNjE0OGU0YzcxMTdiMTBkOTgxOfFb0Q==: 00:11:37.450 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.450 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:37.450 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.450 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.450 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.450 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.450 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:37.450 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:37.709 10:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:11:37.709 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.709 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:37.709 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:37.709 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:37.709 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.709 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.709 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.709 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.709 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.709 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.709 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.709 10:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.276 00:11:38.276 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.276 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.276 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.534 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.534 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.534 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.534 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.534 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.534 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.534 { 00:11:38.534 "cntlid": 141, 00:11:38.534 "qid": 0, 00:11:38.534 "state": "enabled", 00:11:38.534 "thread": "nvmf_tgt_poll_group_000", 00:11:38.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 
00:11:38.534 "listen_address": { 00:11:38.534 "trtype": "TCP", 00:11:38.534 "adrfam": "IPv4", 00:11:38.534 "traddr": "10.0.0.3", 00:11:38.534 "trsvcid": "4420" 00:11:38.534 }, 00:11:38.534 "peer_address": { 00:11:38.534 "trtype": "TCP", 00:11:38.534 "adrfam": "IPv4", 00:11:38.534 "traddr": "10.0.0.1", 00:11:38.534 "trsvcid": "43202" 00:11:38.534 }, 00:11:38.534 "auth": { 00:11:38.534 "state": "completed", 00:11:38.534 "digest": "sha512", 00:11:38.534 "dhgroup": "ffdhe8192" 00:11:38.534 } 00:11:38.534 } 00:11:38.534 ]' 00:11:38.534 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.534 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:38.534 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.793 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:38.793 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.793 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.793 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.793 10:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.051 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:11:39.051 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:01:NGQ5MWE1NWFhZGYxY2Y3ODNlZjNiYmI1MzE4MTUzOTEu34zo: 00:11:39.617 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.617 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:39.617 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.617 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.617 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.617 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.617 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:39.617 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:39.875 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:11:39.875 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.875 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:39.875 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:39.875 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:39.875 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.875 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:11:39.875 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.875 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.875 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.875 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:39.875 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:39.875 10:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:40.442 00:11:40.442 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.442 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.442 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.701 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.701 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.701 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.701 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.701 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.701 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.701 { 00:11:40.701 "cntlid": 143, 00:11:40.701 "qid": 0, 00:11:40.701 "state": "enabled", 00:11:40.701 "thread": "nvmf_tgt_poll_group_000", 00:11:40.701 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:40.701 "listen_address": { 00:11:40.701 "trtype": "TCP", 00:11:40.701 "adrfam": "IPv4", 00:11:40.701 "traddr": "10.0.0.3", 00:11:40.701 "trsvcid": "4420" 00:11:40.701 }, 00:11:40.701 "peer_address": { 00:11:40.701 "trtype": "TCP", 00:11:40.701 "adrfam": "IPv4", 00:11:40.701 "traddr": "10.0.0.1", 00:11:40.701 "trsvcid": "43220" 00:11:40.701 }, 00:11:40.701 "auth": { 00:11:40.701 "state": "completed", 00:11:40.701 "digest": "sha512", 00:11:40.701 "dhgroup": "ffdhe8192" 00:11:40.701 } 00:11:40.701 } 00:11:40.701 ]' 00:11:40.701 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.701 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:40.701 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.701 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:40.701 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.701 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.701 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.701 10:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.959 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:40.959 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:41.526 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.526 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:41.526 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.526 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.526 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.526 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:11:41.526 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:11:41.526 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:11:41.526 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:41.526 
10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:41.526 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:41.785 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:11:41.785 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.785 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:41.785 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:41.785 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:41.785 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.785 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.785 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.785 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.785 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.785 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.785 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.785 10:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.355 00:11:42.355 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.355 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.355 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.613 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.613 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.613 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.613 10:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.613 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.613 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.613 { 00:11:42.613 "cntlid": 145, 00:11:42.613 "qid": 0, 00:11:42.613 "state": "enabled", 00:11:42.613 "thread": "nvmf_tgt_poll_group_000", 00:11:42.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:42.613 "listen_address": { 00:11:42.613 "trtype": "TCP", 00:11:42.613 "adrfam": "IPv4", 00:11:42.613 "traddr": "10.0.0.3", 00:11:42.613 "trsvcid": "4420" 00:11:42.613 }, 00:11:42.613 "peer_address": { 00:11:42.613 "trtype": "TCP", 00:11:42.613 "adrfam": "IPv4", 00:11:42.613 "traddr": "10.0.0.1", 00:11:42.613 "trsvcid": "43250" 00:11:42.613 }, 00:11:42.613 "auth": { 00:11:42.613 "state": "completed", 00:11:42.613 "digest": "sha512", 00:11:42.613 "dhgroup": "ffdhe8192" 00:11:42.613 } 00:11:42.613 } 00:11:42.613 ]' 00:11:42.613 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.613 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:42.613 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.871 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:42.871 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.872 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.872 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.872 10:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.130 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:11:43.130 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:00:ZGQ1ZWYwMTU2MTAwYWIxNDJlMDBlNWI5Mjc5NDEwMjAxYzA5MzhmZDA2NDM5NGQwk77ipA==: --dhchap-ctrl-secret DHHC-1:03:ZmYyMWMzYTI2ZTA5MmZmY2QwYWIwZjk5Yzk3NmU5ODc1NmYwOWQ1M2JjMGIwYzBhYTYyNDRjMmI5MzE2OTYxY1ctcoM=: 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.697 10:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:11:43.697 10:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:11:44.266 request: 00:11:44.266 { 00:11:44.266 "name": "nvme0", 00:11:44.266 "trtype": "tcp", 00:11:44.266 "traddr": "10.0.0.3", 00:11:44.266 "adrfam": "ipv4", 00:11:44.266 "trsvcid": "4420", 00:11:44.266 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:44.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:44.266 "prchk_reftag": false, 00:11:44.266 "prchk_guard": false, 00:11:44.266 "hdgst": false, 00:11:44.266 "ddgst": false, 00:11:44.266 "dhchap_key": "key2", 00:11:44.266 "allow_unrecognized_csi": false, 00:11:44.266 "method": "bdev_nvme_attach_controller", 00:11:44.266 "req_id": 1 00:11:44.266 } 00:11:44.266 Got JSON-RPC error response 00:11:44.266 response: 00:11:44.266 { 00:11:44.266 "code": -5, 00:11:44.266 "message": "Input/output error" 00:11:44.266 } 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 
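Condensed for reference, the mismatched-key negative test traced above boils down to roughly the following shape. The NQNs, address, ports and key names are the ones from this run; the standalone script form and the $hostnqn variable are illustrative placeholders, not the suite's exact helper:
# Target side: this host is only registered with key1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1
# Host side: attaching with key2 is expected to fail with the JSON-RPC
# "Input/output error" (code -5) seen in the response dump above.
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2; then
  echo "attach with a key the target does not know should not have succeeded" >&2
  exit 1
fi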
00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:44.266 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:11:44.832 request: 00:11:44.832 { 00:11:44.832 "name": "nvme0", 00:11:44.832 "trtype": "tcp", 00:11:44.832 "traddr": "10.0.0.3", 00:11:44.832 "adrfam": "ipv4", 00:11:44.832 "trsvcid": "4420", 00:11:44.832 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:44.832 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:44.832 "prchk_reftag": false, 00:11:44.832 "prchk_guard": false, 00:11:44.832 "hdgst": false, 00:11:44.832 "ddgst": false, 00:11:44.832 "dhchap_key": "key1", 00:11:44.832 "dhchap_ctrlr_key": "ckey2", 00:11:44.832 "allow_unrecognized_csi": false, 00:11:44.832 "method": "bdev_nvme_attach_controller", 00:11:44.832 "req_id": 1 00:11:44.832 } 00:11:44.832 Got JSON-RPC error response 00:11:44.832 response: 00:11:44.832 { 00:11:44.832 "code": -5, 00:11:44.832 "message": "Input/output error" 00:11:44.832 } 00:11:44.832 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:44.832 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:44.832 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.833 10:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.091 request: 00:11:45.091 { 00:11:45.091 "name": "nvme0", 00:11:45.091 "trtype": "tcp", 00:11:45.091 "traddr": "10.0.0.3", 00:11:45.091 "adrfam": "ipv4", 00:11:45.091 "trsvcid": "4420", 00:11:45.091 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:45.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:45.091 "prchk_reftag": false, 00:11:45.091 "prchk_guard": false, 00:11:45.091 "hdgst": false, 00:11:45.091 "ddgst": false, 00:11:45.091 "dhchap_key": "key1", 00:11:45.091 "dhchap_ctrlr_key": "ckey1", 00:11:45.091 "allow_unrecognized_csi": false, 00:11:45.091 "method": "bdev_nvme_attach_controller", 00:11:45.091 "req_id": 1 00:11:45.091 } 00:11:45.091 Got JSON-RPC error response 00:11:45.091 response: 00:11:45.091 { 00:11:45.091 "code": -5, 00:11:45.091 "message": "Input/output error" 00:11:45.091 } 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67466 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67466 ']' 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67466 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67466 00:11:45.350 killing process with pid 67466 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 67466' 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67466 00:11:45.350 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67466 00:11:45.609 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:11:45.609 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:45.609 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:45.609 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.609 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70243 00:11:45.609 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:11:45.609 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70243 00:11:45.609 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70243 ']' 00:11:45.609 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.609 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.609 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.609 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.609 10:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.547 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.547 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:46.547 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:46.547 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:46.547 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.547 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.547 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:46.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
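The restart that follows keeps the new target in --wait-for-rpc mode so the DH-CHAP secrets can be registered as keyring keys before configuration continues. A minimal sketch of that sequence, assuming the binary path and flags copied from this log and the key files added further down; the backgrounding and the $nvmfpid variable are illustrative:
# Start the target paused until framework initialization is requested over RPC.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# Once the RPC socket answers, load the generated secrets as keyring keys
# (file names match the keyring_file_add_key calls recorded below).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.bpe
/home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.28L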
00:11:46.547 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70243 00:11:46.547 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70243 ']' 00:11:46.547 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.547 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.547 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.547 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.547 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.805 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.805 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:46.805 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:11:46.806 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.806 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.806 null0 00:11:46.806 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.806 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:46.806 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.bpe 00:11:46.806 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.806 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.806 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.806 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.28L ]] 00:11:46.806 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.28L 00:11:46.806 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.806 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.064 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.064 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:47.064 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.B4a 00:11:47.064 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.064 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.064 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.064 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.w45 ]] 00:11:47.064 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.w45 00:11:47.065 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.065 10:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.DBA 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.KYW ]] 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.KYW 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.hBK 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:47.065 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:48.005 nvme0n1 00:11:48.005 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.005 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.005 10:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.005 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.005 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.005 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.005 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.005 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.005 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.005 { 00:11:48.005 "cntlid": 1, 00:11:48.005 "qid": 0, 00:11:48.005 "state": "enabled", 00:11:48.005 "thread": "nvmf_tgt_poll_group_000", 00:11:48.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:48.005 "listen_address": { 00:11:48.005 "trtype": "TCP", 00:11:48.005 "adrfam": "IPv4", 00:11:48.005 "traddr": "10.0.0.3", 00:11:48.005 "trsvcid": "4420" 00:11:48.005 }, 00:11:48.005 "peer_address": { 00:11:48.005 "trtype": "TCP", 00:11:48.005 "adrfam": "IPv4", 00:11:48.005 "traddr": "10.0.0.1", 00:11:48.005 "trsvcid": "56246" 00:11:48.005 }, 00:11:48.005 "auth": { 00:11:48.005 "state": "completed", 00:11:48.005 "digest": "sha512", 00:11:48.005 "dhgroup": "ffdhe8192" 00:11:48.005 } 00:11:48.005 } 00:11:48.005 ]' 00:11:48.005 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.005 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:48.005 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.270 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:48.270 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state' 00:11:48.270 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.270 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.270 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.529 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:48.529 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:49.097 10:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.097 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:49.097 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.097 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.097 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.097 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key3 00:11:49.097 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.097 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.097 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.097 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:11:49.097 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:11:49.097 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:11:49.097 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:49.097 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:11:49.098 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:49.098 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.098 10:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:49.098 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.098 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:49.098 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:49.098 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:49.357 request: 00:11:49.357 { 00:11:49.357 "name": "nvme0", 00:11:49.357 "trtype": "tcp", 00:11:49.357 "traddr": "10.0.0.3", 00:11:49.357 "adrfam": "ipv4", 00:11:49.357 "trsvcid": "4420", 00:11:49.357 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:49.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:49.357 "prchk_reftag": false, 00:11:49.357 "prchk_guard": false, 00:11:49.357 "hdgst": false, 00:11:49.357 "ddgst": false, 00:11:49.357 "dhchap_key": "key3", 00:11:49.357 "allow_unrecognized_csi": false, 00:11:49.357 "method": "bdev_nvme_attach_controller", 00:11:49.357 "req_id": 1 00:11:49.357 } 00:11:49.357 Got JSON-RPC error response 00:11:49.357 response: 00:11:49.357 { 00:11:49.357 "code": -5, 00:11:49.357 "message": "Input/output error" 00:11:49.357 } 00:11:49.357 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:49.357 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:49.357 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:49.357 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:49.357 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:11:49.357 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:11:49.357 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:11:49.357 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:11:49.616 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:11:49.616 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:49.616 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:11:49.616 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:49.616 10:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.616 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:49.616 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.616 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:49.616 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:49.616 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:49.873 request: 00:11:49.873 { 00:11:49.873 "name": "nvme0", 00:11:49.873 "trtype": "tcp", 00:11:49.873 "traddr": "10.0.0.3", 00:11:49.873 "adrfam": "ipv4", 00:11:49.873 "trsvcid": "4420", 00:11:49.873 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:49.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:49.873 "prchk_reftag": false, 00:11:49.873 "prchk_guard": false, 00:11:49.873 "hdgst": false, 00:11:49.873 "ddgst": false, 00:11:49.873 "dhchap_key": "key3", 00:11:49.873 "allow_unrecognized_csi": false, 00:11:49.873 "method": "bdev_nvme_attach_controller", 00:11:49.873 "req_id": 1 00:11:49.873 } 00:11:49.873 Got JSON-RPC error response 00:11:49.873 response: 00:11:49.873 { 00:11:49.873 "code": -5, 00:11:49.873 "message": "Input/output error" 00:11:49.873 } 00:11:49.873 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:49.873 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:49.873 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:49.873 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:49.873 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:11:49.873 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:11:49.873 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:11:49.874 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:49.874 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:49.874 10:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:11:50.131 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:50.131 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.131 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.131 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.131 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:50.131 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.131 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.131 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.131 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:50.131 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:50.131 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:50.131 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:50.131 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.131 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:50.131 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.131 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:50.132 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:50.132 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:11:50.390 request: 00:11:50.390 { 00:11:50.390 "name": "nvme0", 00:11:50.390 "trtype": "tcp", 00:11:50.390 "traddr": "10.0.0.3", 00:11:50.390 "adrfam": "ipv4", 00:11:50.390 "trsvcid": "4420", 00:11:50.390 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:50.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:50.390 "prchk_reftag": false, 00:11:50.390 "prchk_guard": false, 00:11:50.390 "hdgst": false, 00:11:50.390 "ddgst": false, 00:11:50.390 "dhchap_key": "key0", 00:11:50.390 "dhchap_ctrlr_key": "key1", 00:11:50.390 "allow_unrecognized_csi": false, 00:11:50.390 "method": "bdev_nvme_attach_controller", 00:11:50.390 "req_id": 1 
00:11:50.390 } 00:11:50.390 Got JSON-RPC error response 00:11:50.390 response: 00:11:50.390 { 00:11:50.390 "code": -5, 00:11:50.390 "message": "Input/output error" 00:11:50.390 } 00:11:50.390 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:50.390 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:50.390 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:50.390 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:50.390 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:11:50.390 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:11:50.390 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:11:50.648 nvme0n1 00:11:50.648 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:11:50.648 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.648 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:11:50.905 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.905 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.905 10:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.163 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 00:11:51.163 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.163 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.163 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.163 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:11:51.163 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:51.163 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:52.098 nvme0n1 00:11:52.098 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:11:52.098 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:11:52.098 10:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.098 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.098 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:52.098 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.098 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.098 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.098 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:11:52.098 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.098 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:11:52.358 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.358 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:52.358 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid 0813c78c-bf40-477e-b94d-3900e5d9beb7 -l 0 --dhchap-secret DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: --dhchap-ctrl-secret DHHC-1:03:M2U4NWIxNTM1NmQ4YTBjMTdiMzRmZDEwNTE2OTE3OTA4ZDBhZTg3NzZkMTcwYTZhNDc1MjExMDZkMWE2ODVkMpar+Os=: 00:11:52.927 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:11:52.927 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:11:52.927 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:11:52.927 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:11:52.927 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:11:52.927 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:11:52.927 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:11:52.927 10:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.927 10:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.186 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:11:53.186 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:11:53.186 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:11:53.186 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:11:53.186 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.186 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:11:53.186 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:53.186 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:11:53.186 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:53.186 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:11:53.753 request: 00:11:53.753 { 00:11:53.753 "name": "nvme0", 00:11:53.753 "trtype": "tcp", 00:11:53.753 "traddr": "10.0.0.3", 00:11:53.754 "adrfam": "ipv4", 00:11:53.754 "trsvcid": "4420", 00:11:53.754 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:11:53.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7", 00:11:53.754 "prchk_reftag": false, 00:11:53.754 "prchk_guard": false, 00:11:53.754 "hdgst": false, 00:11:53.754 "ddgst": false, 00:11:53.754 "dhchap_key": "key1", 00:11:53.754 "allow_unrecognized_csi": false, 00:11:53.754 "method": "bdev_nvme_attach_controller", 00:11:53.754 "req_id": 1 00:11:53.754 } 00:11:53.754 Got JSON-RPC error response 00:11:53.754 response: 00:11:53.754 { 00:11:53.754 "code": -5, 00:11:53.754 "message": "Input/output error" 00:11:53.754 } 00:11:53.754 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:11:53.754 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:53.754 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:53.754 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:53.754 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:53.754 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:53.754 10:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:11:54.321 nvme0n1 00:11:54.579 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:11:54.579 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.579 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:11:54.579 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.579 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.579 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.839 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:11:54.839 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.839 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.839 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.839 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:11:54.839 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:11:54.839 10:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:11:55.100 nvme0n1 00:11:55.100 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:11:55.100 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.100 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:11:55.361 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.361 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.361 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.620 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key key3 00:11:55.620 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.620 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.620 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.620 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: '' 2s 00:11:55.620 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:11:55.620 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:11:55.620 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: 00:11:55.620 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:11:55.620 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:11:55.620 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:11:55.620 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: ]] 00:11:55.620 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MjE4ZTIyODNkYTY0NWJlODY4MzM2MjNkZTcwMzlkODVfUGeT: 00:11:55.620 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:11:55.620 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:11:55.620 10:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key1 --dhchap-ctrlr-key key2 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: 2s 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: ]] 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NGNiNzdlODVkODY1MDIwMzI5ZDU3ZGYyYzM1MTJiMTIwOWFhN2E0ZGU5Y2RhYjc44n7SLA==: 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:11:57.562 10:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:00.102 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:12:00.102 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:12:00.102 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:12:00.102 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:12:00.102 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:12:00.102 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:12:00.102 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:12:00.102 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.102 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:00.102 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.102 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.102 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.102 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key 
key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:00.102 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:00.102 10:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:00.671 nvme0n1 00:12:00.671 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:00.671 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.671 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.671 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.671 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:00.671 10:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:01.240 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:12:01.240 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:12:01.240 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.240 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.240 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:12:01.240 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.240 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.240 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.240 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:12:01.240 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:12:01.499 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:12:01.499 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:12:01.499 10:54:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.759 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.759 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:01.759 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.759 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.759 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.759 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:01.759 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:01.759 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:01.759 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:12:01.759 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.759 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:12:01.759 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.759 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:01.759 10:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:02.330 request: 00:12:02.330 { 00:12:02.330 "name": "nvme0", 00:12:02.330 "dhchap_key": "key1", 00:12:02.330 "dhchap_ctrlr_key": "key3", 00:12:02.330 "method": "bdev_nvme_set_keys", 00:12:02.330 "req_id": 1 00:12:02.330 } 00:12:02.330 Got JSON-RPC error response 00:12:02.330 response: 00:12:02.330 { 00:12:02.330 "code": -13, 00:12:02.330 "message": "Permission denied" 00:12:02.330 } 00:12:02.330 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:02.330 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:02.330 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:02.330 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:02.330 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:02.330 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.330 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 
00:12:02.589 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:12:02.589 10:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:12:03.527 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:03.527 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:03.527 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.786 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:12:03.786 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:03.786 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.786 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.786 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.786 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:03.786 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:03.786 10:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:04.724 nvme0n1 00:12:04.724 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:04.724 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.724 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.724 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.724 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:04.724 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:04.724 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:04.724 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # 
local arg=hostrpc 00:12:04.724 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.724 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:12:04.724 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.724 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:04.724 10:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:04.983 request: 00:12:04.983 { 00:12:04.983 "name": "nvme0", 00:12:04.983 "dhchap_key": "key2", 00:12:04.983 "dhchap_ctrlr_key": "key0", 00:12:04.983 "method": "bdev_nvme_set_keys", 00:12:04.983 "req_id": 1 00:12:04.983 } 00:12:04.983 Got JSON-RPC error response 00:12:04.983 response: 00:12:04.983 { 00:12:04.983 "code": -13, 00:12:04.983 "message": "Permission denied" 00:12:04.983 } 00:12:04.983 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:04.983 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:04.983 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:04.983 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:04.983 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:04.983 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.983 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:05.242 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:12:05.242 10:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:12:06.180 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:06.180 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.180 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:06.439 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:12:06.439 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:12:06.439 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:12:06.439 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67502 00:12:06.439 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67502 ']' 00:12:06.439 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67502 00:12:06.439 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:06.439 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.439 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67502 00:12:06.439 killing process with pid 67502 00:12:06.439 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:06.439 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:06.439 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67502' 00:12:06.439 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67502 00:12:06.439 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67502 00:12:07.008 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:07.008 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:07.008 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:12:07.008 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:07.008 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:12:07.008 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.008 10:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.008 rmmod nvme_tcp 00:12:07.008 rmmod nvme_fabrics 00:12:07.008 rmmod nvme_keyring 00:12:07.008 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.008 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:12:07.008 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:12:07.008 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70243 ']' 00:12:07.008 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70243 00:12:07.008 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70243 ']' 00:12:07.008 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70243 00:12:07.008 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:07.008 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.008 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70243 00:12:07.008 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.008 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.008 killing process with pid 70243 00:12:07.008 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70243' 00:12:07.008 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70243 00:12:07.008 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70243 00:12:07.268 10:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:07.268 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:07.527 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:07.527 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:07.527 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:07.527 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:07.527 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:07.527 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.527 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.527 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.527 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:12:07.527 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.bpe /tmp/spdk.key-sha256.B4a /tmp/spdk.key-sha384.DBA /tmp/spdk.key-sha512.hBK /tmp/spdk.key-sha512.28L /tmp/spdk.key-sha384.w45 /tmp/spdk.key-sha256.KYW '' 
/home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:07.527 00:12:07.527 real 2m38.889s 00:12:07.527 user 6m12.407s 00:12:07.527 sys 0m26.778s 00:12:07.527 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.527 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.527 ************************************ 00:12:07.527 END TEST nvmf_auth_target 00:12:07.527 ************************************ 00:12:07.527 10:55:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # [[ tcp == \t\c\p ]] 00:12:07.528 10:55:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:07.528 10:55:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:07.528 10:55:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.528 10:55:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:07.528 ************************************ 00:12:07.528 START TEST nvmf_bdevio_no_huge 00:12:07.528 ************************************ 00:12:07.528 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:07.787 * Looking for test storage... 00:12:07.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:12:07.787 10:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:07.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.787 --rc genhtml_branch_coverage=1 00:12:07.787 --rc genhtml_function_coverage=1 00:12:07.787 --rc genhtml_legend=1 00:12:07.787 --rc geninfo_all_blocks=1 00:12:07.787 --rc geninfo_unexecuted_blocks=1 00:12:07.787 00:12:07.787 ' 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:07.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.787 --rc genhtml_branch_coverage=1 00:12:07.787 --rc genhtml_function_coverage=1 00:12:07.787 --rc genhtml_legend=1 00:12:07.787 --rc geninfo_all_blocks=1 00:12:07.787 --rc geninfo_unexecuted_blocks=1 00:12:07.787 00:12:07.787 ' 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:07.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.787 --rc genhtml_branch_coverage=1 00:12:07.787 --rc genhtml_function_coverage=1 00:12:07.787 --rc genhtml_legend=1 00:12:07.787 --rc geninfo_all_blocks=1 00:12:07.787 --rc geninfo_unexecuted_blocks=1 00:12:07.787 00:12:07.787 ' 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:07.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.787 --rc genhtml_branch_coverage=1 00:12:07.787 --rc genhtml_function_coverage=1 
00:12:07.787 --rc genhtml_legend=1 00:12:07.787 --rc geninfo_all_blocks=1 00:12:07.787 --rc geninfo_unexecuted_blocks=1 00:12:07.787 00:12:07.787 ' 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.787 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:07.788 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:07.788 10:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:07.788 Cannot find device "nvmf_init_br" 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:07.788 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:08.047 Cannot find device "nvmf_init_br2" 00:12:08.047 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:08.047 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:08.047 Cannot find device "nvmf_tgt_br" 00:12:08.047 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:12:08.047 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:08.047 Cannot find device "nvmf_tgt_br2" 00:12:08.047 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:12:08.047 10:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:08.047 Cannot find device "nvmf_init_br" 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:08.047 Cannot find device "nvmf_init_br2" 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:08.047 Cannot find device "nvmf_tgt_br" 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:08.047 Cannot find device "nvmf_tgt_br2" 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:08.047 Cannot find device "nvmf_br" 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:12:08.047 10:55:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:08.047 Cannot find device "nvmf_init_if" 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:08.047 Cannot find device "nvmf_init_if2" 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:08.047 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:08.047 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:08.047 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:08.306 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:08.306 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.124 ms 00:12:08.306 00:12:08.306 --- 10.0.0.3 ping statistics --- 00:12:08.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.306 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:12:08.306 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:08.306 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:12:08.306 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:12:08.306 00:12:08.306 --- 10.0.0.4 ping statistics --- 00:12:08.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.307 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:08.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:08.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:12:08.307 00:12:08.307 --- 10.0.0.1 ping statistics --- 00:12:08.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.307 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:08.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:12:08.307 00:12:08.307 --- 10.0.0.2 ping statistics --- 00:12:08.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.307 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70864 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70864 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 70864 ']' 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
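The trace above is nvmf_veth_init completing the per-test network: the initiator endpoints stay in the root namespace (10.0.0.1 and 10.0.0.2), the target endpoints live in the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), and the four veth peers are joined by the nvmf_br bridge before reachability is confirmed with single pings in both directions. A condensed sketch of that topology, using the names and addresses as they appear in the trace rather than the literal test/nvmf/common.sh code:

ip netns add nvmf_tgt_ns_spdk
# four veth pairs: *_if is the traffic endpoint, *_br is the peer that joins the bridge
for p in nvmf_init_if:nvmf_init_br nvmf_init_if2:nvmf_init_br2 \
         nvmf_tgt_if:nvmf_tgt_br nvmf_tgt_if2:nvmf_tgt_br2; do
    ip link add "${p%%:*}" type veth peer name "${p##*:}"
done
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target endpoints move into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                # initiator addresses (root namespace)
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up
for d in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$d" up; ip link set "$d" master nvmf_br    # enslave the four peers to one bridge
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                   # allow bridged traffic
ping -c 1 10.0.0.3                                      # initiator -> target reachability
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1       # target -> initiator reachability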
00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.307 10:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:08.307 [2024-12-09 10:55:01.447783] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:12:08.307 [2024-12-09 10:55:01.447842] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:08.565 [2024-12-09 10:55:01.726287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.823 [2024-12-09 10:55:01.775003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.823 [2024-12-09 10:55:01.775053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.823 [2024-12-09 10:55:01.775075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.823 [2024-12-09 10:55:01.775081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.823 [2024-12-09 10:55:01.775085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
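The target for this suite is launched inside the namespace without hugepages: --no-huge together with -s 1024 gives the app a 1024 MB plain-memory pool, and the EAL parameter line above shows that translating into --no-huge --legacy-mem -m 1024 for DPDK. Stripped of the harness wrappers, the invocation from the trace is:

# -i 0: shared-memory instance id (NVMF_APP_SHM_ID)
# -e 0xFFFF: tracepoint group mask
# --no-huge -s 1024: run without hugepages, with a 1024 MB memory pool
# -m 0x78: core mask, cores 3 through 6 (matching the reactor start-up notices below)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &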
00:12:08.823 [2024-12-09 10:55:01.775611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:08.823 [2024-12-09 10:55:01.775719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:08.823 [2024-12-09 10:55:01.776380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:08.823 [2024-12-09 10:55:01.776382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.823 [2024-12-09 10:55:01.780819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:09.388 [2024-12-09 10:55:02.376625] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:09.388 Malloc0 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.388 10:55:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:09.388 [2024-12-09 10:55:02.417891] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:09.388 { 00:12:09.388 "params": { 00:12:09.388 "name": "Nvme$subsystem", 00:12:09.388 "trtype": "$TEST_TRANSPORT", 00:12:09.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:09.388 "adrfam": "ipv4", 00:12:09.388 "trsvcid": "$NVMF_PORT", 00:12:09.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:09.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:09.388 "hdgst": ${hdgst:-false}, 00:12:09.388 "ddgst": ${ddgst:-false} 00:12:09.388 }, 00:12:09.388 "method": "bdev_nvme_attach_controller" 00:12:09.388 } 00:12:09.388 EOF 00:12:09.388 )") 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:12:09.388 10:55:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:09.388 "params": { 00:12:09.388 "name": "Nvme1", 00:12:09.388 "trtype": "tcp", 00:12:09.388 "traddr": "10.0.0.3", 00:12:09.388 "adrfam": "ipv4", 00:12:09.388 "trsvcid": "4420", 00:12:09.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:09.388 "hdgst": false, 00:12:09.388 "ddgst": false 00:12:09.388 }, 00:12:09.388 "method": "bdev_nvme_attach_controller" 00:12:09.388 }' 00:12:09.388 [2024-12-09 10:55:02.475896] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
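Target-side provisioning for bdevio is five RPCs, visible as rpc_cmd calls in the trace: create the TCP transport, create a 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE from bdevio.sh), wrap it in subsystem nqn.2016-06.io.spdk:cnode1, attach the namespace, and listen on the namespaced target address. Outside the harness, where rpc_cmd is a thin wrapper over scripts/rpc.py talking to /var/tmp/spdk.sock, the same sequence would be:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB IO unit size (-o as passed by NVMF_TRANSPORT_OPTS)
$rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # attach Malloc0 as namespace 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420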
00:12:09.388 [2024-12-09 10:55:02.475961] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70899 ] 00:12:09.645 [2024-12-09 10:55:02.750057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:09.645 [2024-12-09 10:55:02.799407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.645 [2024-12-09 10:55:02.799630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.646 [2024-12-09 10:55:02.799633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.646 [2024-12-09 10:55:02.812125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:09.904 I/O targets: 00:12:09.904 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:09.904 00:12:09.904 00:12:09.904 CUnit - A unit testing framework for C - Version 2.1-3 00:12:09.904 http://cunit.sourceforge.net/ 00:12:09.904 00:12:09.904 00:12:09.904 Suite: bdevio tests on: Nvme1n1 00:12:09.904 Test: blockdev write read block ...passed 00:12:09.904 Test: blockdev write zeroes read block ...passed 00:12:09.904 Test: blockdev write zeroes read no split ...passed 00:12:09.904 Test: blockdev write zeroes read split ...passed 00:12:09.904 Test: blockdev write zeroes read split partial ...passed 00:12:09.904 Test: blockdev reset ...[2024-12-09 10:55:03.001006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:09.904 [2024-12-09 10:55:03.001082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b0e90 (9): Bad file descriptor 00:12:09.904 passed 00:12:09.904 Test: blockdev write read 8 blocks ...[2024-12-09 10:55:03.016076] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
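On the initiator side, bdevio is driven entirely by the JSON document handed to it on --json /dev/fd/62; the bdev_nvme_attach_controller entry printed above is that document's payload, pointing the Nvme1 controller at 10.0.0.3:4420 over TCP with digests disabled. A rough stand-alone equivalent follows: the surrounding "subsystems"/"bdev" envelope is the usual SPDK JSON-config shape and the temporary file path is only for illustration, neither is copied from this trace.

cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024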
00:12:09.904 passed 00:12:09.904 Test: blockdev write read size > 128k ...passed 00:12:09.904 Test: blockdev write read invalid size ...passed 00:12:09.904 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:09.904 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:09.904 Test: blockdev write read max offset ...passed 00:12:09.904 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:09.904 Test: blockdev writev readv 8 blocks ...passed 00:12:09.904 Test: blockdev writev readv 30 x 1block ...passed 00:12:09.904 Test: blockdev writev readv block ...passed 00:12:09.904 Test: blockdev writev readv size > 128k ...passed 00:12:09.904 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:09.904 Test: blockdev comparev and writev ...[2024-12-09 10:55:03.022272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.904 [2024-12-09 10:55:03.022306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:09.904 [2024-12-09 10:55:03.022320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.904 [2024-12-09 10:55:03.022326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:09.904 [2024-12-09 10:55:03.022583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.904 [2024-12-09 10:55:03.022601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:09.904 [2024-12-09 10:55:03.022612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.904 [2024-12-09 10:55:03.022619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:09.904 [2024-12-09 10:55:03.022840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.904 [2024-12-09 10:55:03.022857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:09.904 [2024-12-09 10:55:03.022868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.904 [2024-12-09 10:55:03.022874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:09.904 [2024-12-09 10:55:03.023083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.904 [2024-12-09 10:55:03.023100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:09.904 [2024-12-09 10:55:03.023111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:09.904 [2024-12-09 10:55:03.023118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:12:09.904 passed 00:12:09.904 Test: blockdev nvme passthru rw ...passed 00:12:09.904 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:55:03.023712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:09.904 [2024-12-09 10:55:03.023732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:09.904 [2024-12-09 10:55:03.023820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:09.904 [2024-12-09 10:55:03.023835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:09.904 [2024-12-09 10:55:03.023918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:09.904 [2024-12-09 10:55:03.023932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:09.904 [2024-12-09 10:55:03.024013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:09.904 [2024-12-09 10:55:03.024045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:09.904 passed 00:12:09.904 Test: blockdev nvme admin passthru ...passed 00:12:09.904 Test: blockdev copy ...passed 00:12:09.904 00:12:09.904 Run Summary: Type Total Ran Passed Failed Inactive 00:12:09.904 suites 1 1 n/a 0 0 00:12:09.904 tests 23 23 23 0 0 00:12:09.904 asserts 152 152 152 0 n/a 00:12:09.904 00:12:09.904 Elapsed time = 0.150 seconds 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:10.470 rmmod nvme_tcp 00:12:10.470 rmmod nvme_fabrics 00:12:10.470 rmmod nvme_keyring 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:12:10.470 10:55:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70864 ']' 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70864 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 70864 ']' 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 70864 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70864 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:10.470 killing process with pid 70864 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70864' 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 70864 00:12:10.470 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 70864 00:12:11.053 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:11.053 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:11.053 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:11.053 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:12:11.053 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:12:11.053 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:11.053 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:12:11.053 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:11.053 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:11.053 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:11.053 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:11.053 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:11.053 10:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:11.053 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:11.053 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:11.053 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set 
nvmf_tgt_br down 00:12:11.053 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:11.053 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:11.053 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:11.053 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:11.053 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:11.053 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:11.053 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:11.053 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.053 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.053 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.312 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:12:11.312 00:12:11.312 real 0m3.578s 00:12:11.312 user 0m9.959s 00:12:11.312 sys 0m1.303s 00:12:11.312 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.312 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:11.312 ************************************ 00:12:11.312 END TEST nvmf_bdevio_no_huge 00:12:11.312 ************************************ 00:12:11.312 10:55:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # '[' tcp = tcp ']' 00:12:11.312 10:55:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:11.312 10:55:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:11.312 10:55:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.312 10:55:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:11.312 ************************************ 00:12:11.312 START TEST nvmf_tls 00:12:11.312 ************************************ 00:12:11.312 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:11.312 * Looking for test storage... 
00:12:11.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:11.312 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:11.312 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:12:11.312 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:11.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.572 --rc genhtml_branch_coverage=1 00:12:11.572 --rc genhtml_function_coverage=1 00:12:11.572 --rc genhtml_legend=1 00:12:11.572 --rc geninfo_all_blocks=1 00:12:11.572 --rc geninfo_unexecuted_blocks=1 00:12:11.572 00:12:11.572 ' 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:11.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.572 --rc genhtml_branch_coverage=1 00:12:11.572 --rc genhtml_function_coverage=1 00:12:11.572 --rc genhtml_legend=1 00:12:11.572 --rc geninfo_all_blocks=1 00:12:11.572 --rc geninfo_unexecuted_blocks=1 00:12:11.572 00:12:11.572 ' 00:12:11.572 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:11.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.572 --rc genhtml_branch_coverage=1 00:12:11.572 --rc genhtml_function_coverage=1 00:12:11.572 --rc genhtml_legend=1 00:12:11.572 --rc geninfo_all_blocks=1 00:12:11.572 --rc geninfo_unexecuted_blocks=1 00:12:11.573 00:12:11.573 ' 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:11.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.573 --rc genhtml_branch_coverage=1 00:12:11.573 --rc genhtml_function_coverage=1 00:12:11.573 --rc genhtml_legend=1 00:12:11.573 --rc geninfo_all_blocks=1 00:12:11.573 --rc geninfo_unexecuted_blocks=1 00:12:11.573 00:12:11.573 ' 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.573 10:55:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:11.573 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:11.573 
10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:11.573 Cannot find device "nvmf_init_br" 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:11.573 Cannot find device "nvmf_init_br2" 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:11.573 Cannot find device "nvmf_tgt_br" 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:11.573 Cannot find device "nvmf_tgt_br2" 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:11.573 Cannot find device "nvmf_init_br" 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:11.573 Cannot find device "nvmf_init_br2" 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:11.573 Cannot find device "nvmf_tgt_br" 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:11.573 Cannot find device "nvmf_tgt_br2" 00:12:11.573 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:12:11.574 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:11.574 Cannot find device "nvmf_br" 00:12:11.574 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:12:11.574 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:11.574 Cannot find device "nvmf_init_if" 00:12:11.574 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:12:11.574 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:11.834 Cannot find device "nvmf_init_if2" 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:11.834 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:11.834 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:11.834 10:55:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:11.834 10:55:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:11.834 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:11.834 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:11.834 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.125 ms 00:12:11.834 00:12:11.834 --- 10.0.0.3 ping statistics --- 00:12:11.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.834 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:12.100 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:12.100 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 00:12:12.100 00:12:12.100 --- 10.0.0.4 ping statistics --- 00:12:12.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.100 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:12.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:12.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:12:12.100 00:12:12.100 --- 10.0.0.1 ping statistics --- 00:12:12.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.100 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:12.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:12.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:12:12.100 00:12:12.100 --- 10.0.0.2 ping statistics --- 00:12:12.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:12.100 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71130 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71130 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71130 ']' 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:12.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:12.100 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:12.100 [2024-12-09 10:55:05.131920] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
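At this point nvmf/common.sh has torn down any stale interfaces (the "Cannot find device" lines are the expected first-run misses) and rebuilt the loopback test bed: the target side lives in the nvmf_tgt_ns_spdk network namespace, the initiator side stays in the root namespace, veth pairs join the two through the nvmf_br bridge, iptables admits NVMe/TCP traffic on port 4420, and the pings confirm both directions before nvmf_tgt is launched inside the namespace. A condensed, hand-written sketch of the equivalent setup (not part of the captured output; names and addresses copied from the log, retries and cleanup omitted) is:

# Rebuild the two-sided NVMe/TCP test bed used by this run (sketch).
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br ends join the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# initiator gets 10.0.0.1/.2, target (inside the namespace) gets 10.0.0.3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring every device up and bridge the host-side veth ends together
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" master nvmf_br
done

# admit NVMe/TCP (port 4420) and let the bridge forward between its ports
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# verify both directions, then start the target inside the namespace
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &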
00:12:12.100 [2024-12-09 10:55:05.131983] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.366 [2024-12-09 10:55:05.283299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.366 [2024-12-09 10:55:05.331620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.366 [2024-12-09 10:55:05.331666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.366 [2024-12-09 10:55:05.331672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.366 [2024-12-09 10:55:05.331677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.366 [2024-12-09 10:55:05.331681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.366 [2024-12-09 10:55:05.331957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.934 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.934 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:12.934 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:12.934 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:12.934 10:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:12.934 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.934 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:12:12.934 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:13.192 true 00:12:13.192 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:12:13.192 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:13.451 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:12:13.451 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:12:13.451 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:13.709 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:13.709 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:12:13.709 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:12:13.709 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:12:13.709 10:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:13.967 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:12:13.967 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:12:14.226 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:12:14.226 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:12:14.226 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:14.226 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:12:14.485 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:12:14.485 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:12:14.485 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:14.745 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:12:14.745 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:14.745 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:12:14.745 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:12:14.745 10:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:15.004 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:15.004 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Xp689IywgR 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.sxsC766G4v 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Xp689IywgR 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.sxsC766G4v 00:12:15.263 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:15.522 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:15.782 [2024-12-09 10:55:08.873855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:15.782 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Xp689IywgR 00:12:15.782 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Xp689IywgR 00:12:15.782 10:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:16.041 [2024-12-09 10:55:09.111630] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.041 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:16.300 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:16.559 [2024-12-09 10:55:09.506905] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:16.559 [2024-12-09 10:55:09.507210] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:16.559 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:16.559 malloc0 00:12:16.559 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:16.817 10:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Xp689IywgR 00:12:17.075 10:55:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:17.334 10:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Xp689IywgR 00:12:27.316 Initializing NVMe Controllers 00:12:27.316 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:12:27.316 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:27.316 Initialization complete. Launching workers. 00:12:27.316 ======================================================== 00:12:27.316 Latency(us) 00:12:27.316 Device Information : IOPS MiB/s Average min max 00:12:27.316 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15987.49 62.45 4003.54 787.06 5197.62 00:12:27.316 ======================================================== 00:12:27.316 Total : 15987.49 62.45 4003.54 787.06 5197.62 00:12:27.316 00:12:27.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:27.575 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Xp689IywgR 00:12:27.575 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:27.575 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:27.576 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:27.576 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Xp689IywgR 00:12:27.576 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:27.576 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71363 00:12:27.576 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:27.576 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71363 /var/tmp/bdevperf.sock 00:12:27.576 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71363 ']' 00:12:27.576 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:27.576 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.576 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
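The spdk_nvme_perf run above, and the bdevperf run that follows, both attach to a subsystem provisioned through the plain /var/tmp/spdk.sock RPC channel. Replayed in order, the target-side sequence from target/tls.sh is the one sketched below (rpc.py path and arguments exactly as logged; because nvmf_tgt was started with --wait-for-rpc, the ssl socket options are applied before framework_start_init):

# Target-side TLS provisioning, condensed from the RPC calls in this log (sketch).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc sock_set_default_impl -i ssl                    # TLS-capable ssl socket implementation
$rpc sock_impl_set_options -i ssl --tls-version 13   # require TLS 1.3
$rpc framework_start_init                            # finish startup (target ran with --wait-for-rpc)

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Register the interchange-format key file with the keyring, then bind it to the
# host NQN that is allowed to connect to this subsystem with that PSK.
$rpc keyring_file_add_key key0 /tmp/tmp.Xp689IywgR
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0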
00:12:27.576 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.576 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:27.576 10:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:27.576 [2024-12-09 10:55:20.558482] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:12:27.576 [2024-12-09 10:55:20.558658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71363 ] 00:12:27.576 [2024-12-09 10:55:20.707307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.834 [2024-12-09 10:55:20.759062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.834 [2024-12-09 10:55:20.799645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:28.401 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.401 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:28.401 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Xp689IywgR 00:12:28.659 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:28.659 [2024-12-09 10:55:21.804223] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:28.917 TLSTESTn1 00:12:28.917 10:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:28.917 Running I/O for 10 seconds... 
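Both authenticated runs hand the initiator the key file /tmp/tmp.Xp689IywgR, which holds the PSK in NVMe TLS interchange format. The log produces that value with an inline "python -" helper whose body is not shown; the sketch below is a reconstruction based on the logged output and on the NVMe/TCP PSK interchange framing (prefix, two-digit hash indicator where 01 means SHA-256 and 02 SHA-384, then base64 of the configured key bytes followed by a 4-byte little-endian CRC-32 trailer), so treat it as an assumption rather than the verbatim helper:

# Reconstructed sketch of what format_interchange_psk appears to compute.
key=00112233445566778899aabbccddeeff
digest=1   # 1 -> "01" (SHA-256 retained-hash indicator)
python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib

key = sys.argv[1].encode()          # the configured PSK, treated as a byte string
digest = int(sys.argv[2])
crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)   # little-endian CRC-32 trailer
print(f"NVMeTLSkey-1:{digest:02d}:{base64.b64encode(key + crc).decode()}:")
EOF
# If the assumption about the trailer is right, this prints the same
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: value that
# the log writes to /tmp/tmp.Xp689IywgR and protects with chmod 0600.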
00:12:31.231 6179.00 IOPS, 24.14 MiB/s [2024-12-09T10:55:25.354Z] 6181.00 IOPS, 24.14 MiB/s [2024-12-09T10:55:26.307Z] 6176.33 IOPS, 24.13 MiB/s [2024-12-09T10:55:27.261Z] 6270.75 IOPS, 24.50 MiB/s [2024-12-09T10:55:28.197Z] 6380.80 IOPS, 24.93 MiB/s [2024-12-09T10:55:29.136Z] 6434.83 IOPS, 25.14 MiB/s [2024-12-09T10:55:30.074Z] 6462.43 IOPS, 25.24 MiB/s [2024-12-09T10:55:31.012Z] 6478.00 IOPS, 25.30 MiB/s [2024-12-09T10:55:32.392Z] 6488.33 IOPS, 25.35 MiB/s [2024-12-09T10:55:32.392Z] 6490.60 IOPS, 25.35 MiB/s 00:12:39.213 Latency(us) 00:12:39.213 [2024-12-09T10:55:32.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.213 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:39.213 Verification LBA range: start 0x0 length 0x2000 00:12:39.213 TLSTESTn1 : 10.01 6496.69 25.38 0.00 0.00 19671.48 3634.53 15453.90 00:12:39.213 [2024-12-09T10:55:32.392Z] =================================================================================================================== 00:12:39.213 [2024-12-09T10:55:32.392Z] Total : 6496.69 25.38 0.00 0.00 19671.48 3634.53 15453.90 00:12:39.213 { 00:12:39.213 "results": [ 00:12:39.213 { 00:12:39.213 "job": "TLSTESTn1", 00:12:39.213 "core_mask": "0x4", 00:12:39.213 "workload": "verify", 00:12:39.213 "status": "finished", 00:12:39.213 "verify_range": { 00:12:39.213 "start": 0, 00:12:39.213 "length": 8192 00:12:39.213 }, 00:12:39.213 "queue_depth": 128, 00:12:39.213 "io_size": 4096, 00:12:39.213 "runtime": 10.009866, 00:12:39.213 "iops": 6496.690365285609, 00:12:39.213 "mibps": 25.37769673939691, 00:12:39.213 "io_failed": 0, 00:12:39.213 "io_timeout": 0, 00:12:39.213 "avg_latency_us": 19671.483744823345, 00:12:39.213 "min_latency_us": 3634.5292576419215, 00:12:39.213 "max_latency_us": 15453.903930131004 00:12:39.213 } 00:12:39.213 ], 00:12:39.213 "core_count": 1 00:12:39.213 } 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71363 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71363 ']' 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71363 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71363 00:12:39.213 killing process with pid 71363 00:12:39.213 Received shutdown signal, test time was about 10.000000 seconds 00:12:39.213 00:12:39.213 Latency(us) 00:12:39.213 [2024-12-09T10:55:32.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.213 [2024-12-09T10:55:32.392Z] =================================================================================================================== 00:12:39.213 [2024-12-09T10:55:32.392Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71363' 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71363 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71363 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sxsC766G4v 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sxsC766G4v 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sxsC766G4v 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.sxsC766G4v 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71498 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71498 /var/tmp/bdevperf.sock 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71498 ']' 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:39.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.213 10:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:39.213 [2024-12-09 10:55:32.309807] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:12:39.213 [2024-12-09 10:55:32.309939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71498 ] 00:12:39.473 [2024-12-09 10:55:32.440167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.473 [2024-12-09 10:55:32.490595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.473 [2024-12-09 10:55:32.530573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:40.042 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.042 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:40.042 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sxsC766G4v 00:12:40.302 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:40.561 [2024-12-09 10:55:33.542292] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:40.562 [2024-12-09 10:55:33.546846] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:40.562 [2024-12-09 10:55:33.547517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bd030 (107): Transport endpoint is not connected 00:12:40.562 [2024-12-09 10:55:33.548505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6bd030 (9): Bad file descriptor 00:12:40.562 [2024-12-09 10:55:33.549500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:12:40.562 [2024-12-09 10:55:33.549546] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:12:40.562 [2024-12-09 10:55:33.549583] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:12:40.562 [2024-12-09 10:55:33.549643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
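This is the first negative case: bdevperf (pid 71498) registers the second key, /tmp/tmp.sxsC766G4v, but the target only has the first key on file for host1, so the TLS connection is torn down (the errno 107, Transport endpoint is not connected, trace above) and the controller ends up in a failed state; the JSON-RPC dump that follows reports code -5, Input/output error. A hedged stand-alone reproduction of the same check (RPC socket, addresses, and NQNs as in the log) could look like:

# Expect bdev_nvme_attach_controller to fail when the initiator presents a PSK
# the target has not registered for this host (sketch).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sxsC766G4v
if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
  echo "unexpected: attach succeeded with a mismatched PSK" >&2
  exit 1
fi
echo "attach failed as expected with the mismatched PSK"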
00:12:40.562 request: 00:12:40.562 { 00:12:40.562 "name": "TLSTEST", 00:12:40.562 "trtype": "tcp", 00:12:40.562 "traddr": "10.0.0.3", 00:12:40.562 "adrfam": "ipv4", 00:12:40.562 "trsvcid": "4420", 00:12:40.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:40.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:40.562 "prchk_reftag": false, 00:12:40.562 "prchk_guard": false, 00:12:40.562 "hdgst": false, 00:12:40.562 "ddgst": false, 00:12:40.562 "psk": "key0", 00:12:40.562 "allow_unrecognized_csi": false, 00:12:40.562 "method": "bdev_nvme_attach_controller", 00:12:40.562 "req_id": 1 00:12:40.562 } 00:12:40.562 Got JSON-RPC error response 00:12:40.562 response: 00:12:40.562 { 00:12:40.562 "code": -5, 00:12:40.562 "message": "Input/output error" 00:12:40.562 } 00:12:40.562 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71498 00:12:40.562 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71498 ']' 00:12:40.562 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71498 00:12:40.562 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:40.562 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.562 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71498 00:12:40.562 killing process with pid 71498 00:12:40.562 Received shutdown signal, test time was about 10.000000 seconds 00:12:40.562 00:12:40.562 Latency(us) 00:12:40.562 [2024-12-09T10:55:33.741Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.562 [2024-12-09T10:55:33.741Z] =================================================================================================================== 00:12:40.562 [2024-12-09T10:55:33.741Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:40.562 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:40.562 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:40.562 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71498' 00:12:40.562 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71498 00:12:40.562 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71498 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Xp689IywgR 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Xp689IywgR 
00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Xp689IywgR 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Xp689IywgR 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71526 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71526 /var/tmp/bdevperf.sock 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71526 ']' 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:40.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:40.822 10:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:40.822 [2024-12-09 10:55:33.862256] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:12:40.822 [2024-12-09 10:55:33.862389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71526 ] 00:12:41.081 [2024-12-09 10:55:34.013583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.082 [2024-12-09 10:55:34.060324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:41.082 [2024-12-09 10:55:34.100871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:41.650 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.650 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:41.650 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Xp689IywgR 00:12:41.973 10:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:12:41.973 [2024-12-09 10:55:35.092755] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:41.973 [2024-12-09 10:55:35.103921] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:41.973 [2024-12-09 10:55:35.104026] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:41.973 [2024-12-09 10:55:35.104100] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:41.973 [2024-12-09 10:55:35.105032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1472030 (107): Transport endpoint is not connected 00:12:41.973 [2024-12-09 10:55:35.106023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1472030 (9): Bad file descriptor 00:12:41.973 [2024-12-09 10:55:35.107018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:12:41.973 [2024-12-09 10:55:35.107094] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:12:41.973 [2024-12-09 10:55:35.107119] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:12:41.973 [2024-12-09 10:55:35.107165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
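The second negative case keeps the good key but connects as nqn.2016-06.io.spdk:host2. Here the rejection happens on the target before any data flows: the PSK lookup identity it logs ("NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1") has no registered key, because only host1 was added to cnode1. For that identity to resolve, the target would additionally need a key on file for host2, along the lines of the sketch below (hypothetical /tmp/host2.psk holding the same interchange-format key the initiator presents):

# Hypothetical extra provisioning that this negative test deliberately omits (sketch).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc keyring_file_add_key key_host2 /tmp/host2.psk
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key_host2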
00:12:41.973 request: 00:12:41.973 { 00:12:41.973 "name": "TLSTEST", 00:12:41.973 "trtype": "tcp", 00:12:41.973 "traddr": "10.0.0.3", 00:12:41.973 "adrfam": "ipv4", 00:12:41.973 "trsvcid": "4420", 00:12:41.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.973 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:41.973 "prchk_reftag": false, 00:12:41.973 "prchk_guard": false, 00:12:41.973 "hdgst": false, 00:12:41.973 "ddgst": false, 00:12:41.973 "psk": "key0", 00:12:41.973 "allow_unrecognized_csi": false, 00:12:41.973 "method": "bdev_nvme_attach_controller", 00:12:41.973 "req_id": 1 00:12:41.973 } 00:12:41.973 Got JSON-RPC error response 00:12:41.973 response: 00:12:41.973 { 00:12:41.973 "code": -5, 00:12:41.973 "message": "Input/output error" 00:12:41.973 } 00:12:41.973 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71526 00:12:41.973 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71526 ']' 00:12:41.973 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71526 00:12:41.973 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:41.973 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.973 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71526 00:12:42.232 killing process with pid 71526 00:12:42.232 Received shutdown signal, test time was about 10.000000 seconds 00:12:42.232 00:12:42.232 Latency(us) 00:12:42.232 [2024-12-09T10:55:35.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.232 [2024-12-09T10:55:35.411Z] =================================================================================================================== 00:12:42.232 [2024-12-09T10:55:35.411Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71526' 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71526 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71526 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Xp689IywgR 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Xp689IywgR 
00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Xp689IywgR 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Xp689IywgR 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71555 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71555 /var/tmp/bdevperf.sock 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71555 ']' 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:42.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.232 10:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:42.491 [2024-12-09 10:55:35.409776] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:12:42.491 [2024-12-09 10:55:35.409924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71555 ] 00:12:42.491 [2024-12-09 10:55:35.539690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.491 [2024-12-09 10:55:35.591442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.491 [2024-12-09 10:55:35.632086] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:43.429 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:43.429 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:43.429 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Xp689IywgR 00:12:43.429 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:43.688 [2024-12-09 10:55:36.668259] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:43.688 [2024-12-09 10:55:36.672610] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:43.688 [2024-12-09 10:55:36.672642] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:43.688 [2024-12-09 10:55:36.672681] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:43.688 [2024-12-09 10:55:36.673387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe80030 (107): Transport endpoint is not connected 00:12:43.688 [2024-12-09 10:55:36.674375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe80030 (9): Bad file descriptor 00:12:43.688 [2024-12-09 10:55:36.675372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:12:43.688 [2024-12-09 10:55:36.675389] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:12:43.688 [2024-12-09 10:55:36.675396] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:12:43.688 [2024-12-09 10:55:36.675405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
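The third negative case flips the other identifier: host1 presents the good key but asks for nqn.2016-06.io.spdk:cnode2, a subsystem that was never created, so the identity "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" has nothing behind it and the attach fails the same way. For contrast, a sketch of the provisioning that would have to exist for this path to work (hypothetical serial number; reusing malloc0 only for illustration):

# Hypothetical cnode2 provisioning that this negative test relies on being absent (sketch).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk key0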
00:12:43.688 request: 00:12:43.688 { 00:12:43.688 "name": "TLSTEST", 00:12:43.688 "trtype": "tcp", 00:12:43.688 "traddr": "10.0.0.3", 00:12:43.688 "adrfam": "ipv4", 00:12:43.688 "trsvcid": "4420", 00:12:43.688 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:12:43.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:43.688 "prchk_reftag": false, 00:12:43.688 "prchk_guard": false, 00:12:43.688 "hdgst": false, 00:12:43.688 "ddgst": false, 00:12:43.688 "psk": "key0", 00:12:43.688 "allow_unrecognized_csi": false, 00:12:43.688 "method": "bdev_nvme_attach_controller", 00:12:43.688 "req_id": 1 00:12:43.688 } 00:12:43.688 Got JSON-RPC error response 00:12:43.688 response: 00:12:43.688 { 00:12:43.688 "code": -5, 00:12:43.688 "message": "Input/output error" 00:12:43.688 } 00:12:43.688 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71555 00:12:43.688 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71555 ']' 00:12:43.688 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71555 00:12:43.688 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:43.688 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.688 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71555 00:12:43.688 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:43.688 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:43.688 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71555' 00:12:43.688 killing process with pid 71555 00:12:43.688 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71555 00:12:43.688 Received shutdown signal, test time was about 10.000000 seconds 00:12:43.688 00:12:43.688 Latency(us) 00:12:43.688 [2024-12-09T10:55:36.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.688 [2024-12-09T10:55:36.867Z] =================================================================================================================== 00:12:43.688 [2024-12-09T10:55:36.867Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:43.688 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71555 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:43.947 10:55:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:43.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71583 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71583 /var/tmp/bdevperf.sock 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71583 ']' 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.947 10:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:43.947 [2024-12-09 10:55:36.976251] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:12:43.947 [2024-12-09 10:55:36.976414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71583 ] 00:12:43.947 [2024-12-09 10:55:37.108042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.205 [2024-12-09 10:55:37.158633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.205 [2024-12-09 10:55:37.198523] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:44.773 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.773 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:44.773 10:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:12:45.031 [2024-12-09 10:55:38.050975] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:12:45.031 [2024-12-09 10:55:38.051095] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:12:45.031 request: 00:12:45.031 { 00:12:45.031 "name": "key0", 00:12:45.031 "path": "", 00:12:45.031 "method": "keyring_file_add_key", 00:12:45.031 "req_id": 1 00:12:45.031 } 00:12:45.031 Got JSON-RPC error response 00:12:45.031 response: 00:12:45.031 { 00:12:45.031 "code": -1, 00:12:45.031 "message": "Operation not permitted" 00:12:45.031 } 00:12:45.031 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:45.291 [2024-12-09 10:55:38.262703] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:45.291 [2024-12-09 10:55:38.262851] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:12:45.291 request: 00:12:45.291 { 00:12:45.291 "name": "TLSTEST", 00:12:45.291 "trtype": "tcp", 00:12:45.291 "traddr": "10.0.0.3", 00:12:45.291 "adrfam": "ipv4", 00:12:45.291 "trsvcid": "4420", 00:12:45.291 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:45.291 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:45.291 "prchk_reftag": false, 00:12:45.291 "prchk_guard": false, 00:12:45.291 "hdgst": false, 00:12:45.291 "ddgst": false, 00:12:45.291 "psk": "key0", 00:12:45.291 "allow_unrecognized_csi": false, 00:12:45.291 "method": "bdev_nvme_attach_controller", 00:12:45.291 "req_id": 1 00:12:45.291 } 00:12:45.291 Got JSON-RPC error response 00:12:45.291 response: 00:12:45.291 { 00:12:45.291 "code": -126, 00:12:45.291 "message": "Required key not available" 00:12:45.291 } 00:12:45.291 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71583 00:12:45.291 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71583 ']' 00:12:45.291 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71583 00:12:45.291 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:45.291 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.291 10:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71583 00:12:45.291 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:45.291 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:45.291 killing process with pid 71583 00:12:45.291 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71583' 00:12:45.291 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71583 00:12:45.291 Received shutdown signal, test time was about 10.000000 seconds 00:12:45.291 00:12:45.291 Latency(us) 00:12:45.291 [2024-12-09T10:55:38.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.291 [2024-12-09T10:55:38.470Z] =================================================================================================================== 00:12:45.291 [2024-12-09T10:55:38.470Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:45.291 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71583 00:12:45.551 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:12:45.551 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:12:45.551 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:45.551 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:45.551 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:45.551 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71130 00:12:45.551 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71130 ']' 00:12:45.551 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71130 00:12:45.551 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:45.551 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.551 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71130 00:12:45.551 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:45.551 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:45.551 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71130' 00:12:45.551 killing process with pid 71130 00:12:45.551 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71130 00:12:45.551 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71130 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ltsKTZPN6i 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ltsKTZPN6i 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71623 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71623 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71623 ']' 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.810 10:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:45.810 [2024-12-09 10:55:38.898940] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:12:45.810 [2024-12-09 10:55:38.899001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.070 [2024-12-09 10:55:39.043762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.070 [2024-12-09 10:55:39.090158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.070 [2024-12-09 10:55:39.090269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
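At this point the test has generated the PSK it will use for the rest of the run: format_interchange_psk wraps the raw hex key in the NVMe TLS PSK interchange format (the 02 hash field in the NVMeTLSkey-1:02: prefix is the SHA-384 variant), and the result is written to a temp file that is then restricted to mode 0600 so the file-based keyring will accept it. A condensed sketch of the same file setup, using the values from this run:

  key_long="NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:"
  key_long_path=$(mktemp)              # /tmp/tmp.ltsKTZPN6i in this run
  echo -n "$key_long" > "$key_long_path"
  chmod 0600 "$key_long_path"          # owner-only; looser modes are rejected later in the test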
00:12:46.070 [2024-12-09 10:55:39.090303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.070 [2024-12-09 10:55:39.090329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.070 [2024-12-09 10:55:39.090343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.070 [2024-12-09 10:55:39.090626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.070 [2024-12-09 10:55:39.131263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:46.641 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.641 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:46.641 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:46.641 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:46.641 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:46.641 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.641 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ltsKTZPN6i 00:12:46.641 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ltsKTZPN6i 00:12:46.641 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:46.901 [2024-12-09 10:55:39.979770] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.901 10:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:47.160 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:47.418 [2024-12-09 10:55:40.363073] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:47.418 [2024-12-09 10:55:40.363265] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:47.418 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:47.418 malloc0 00:12:47.418 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:47.676 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ltsKTZPN6i 00:12:47.935 10:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:48.194 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ltsKTZPN6i 00:12:48.194 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
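setup_nvmf_tgt, traced above, builds the TLS-enabled target: a TCP transport, subsystem cnode1 backed by a 32 MiB malloc namespace, a listener on 10.0.0.3:4420 created with -k (the saved config at the end of the log shows it as "secure_channel": true), the PSK registered as key0, and host1 allowed in with that key. The same sequence as plain RPC calls (sketch; rpc is shorthand for the script path used by this job):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.ltsKTZPN6i
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0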
00:12:48.194 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:48.194 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:48.194 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ltsKTZPN6i 00:12:48.194 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:48.194 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:48.194 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71682 00:12:48.194 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:48.194 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71682 /var/tmp/bdevperf.sock 00:12:48.194 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71682 ']' 00:12:48.194 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:48.194 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:48.194 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:48.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:48.194 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:48.194 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:48.194 [2024-12-09 10:55:41.281495] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
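With the target configured, this bdevperf instance (pid 71682) is the positive TLS case. It is launched with -z -r /var/tmp/bdevperf.sock, so it sits idle until driven over its own RPC socket; the host-side steps traced below come down to registering the same key file in bdevperf's keyring, attaching a controller over TLS, and starting the verify workload (a sketch; rpc is the same shorthand as earlier):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ltsKTZPN6i
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The run below settles at roughly 6.5k IOPS over the 10-second verify pass on the resulting TLSTESTn1 bdev.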
00:12:48.194 [2024-12-09 10:55:41.281609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71682 ] 00:12:48.453 [2024-12-09 10:55:41.414117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.453 [2024-12-09 10:55:41.462348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.453 [2024-12-09 10:55:41.502234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:48.453 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.453 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:12:48.453 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ltsKTZPN6i 00:12:48.712 10:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:12:48.971 [2024-12-09 10:55:41.939152] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:48.971 TLSTESTn1 00:12:48.971 10:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:48.971 Running I/O for 10 seconds... 00:12:51.289 6538.00 IOPS, 25.54 MiB/s [2024-12-09T10:55:45.405Z] 6461.00 IOPS, 25.24 MiB/s [2024-12-09T10:55:46.342Z] 6503.00 IOPS, 25.40 MiB/s [2024-12-09T10:55:47.283Z] 6544.00 IOPS, 25.56 MiB/s [2024-12-09T10:55:48.218Z] 6580.20 IOPS, 25.70 MiB/s [2024-12-09T10:55:49.152Z] 6554.17 IOPS, 25.60 MiB/s [2024-12-09T10:55:50.527Z] 6546.43 IOPS, 25.57 MiB/s [2024-12-09T10:55:51.094Z] 6559.50 IOPS, 25.62 MiB/s [2024-12-09T10:55:52.469Z] 6560.78 IOPS, 25.63 MiB/s [2024-12-09T10:55:52.469Z] 6567.30 IOPS, 25.65 MiB/s 00:12:59.290 Latency(us) 00:12:59.290 [2024-12-09T10:55:52.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.290 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:59.290 Verification LBA range: start 0x0 length 0x2000 00:12:59.290 TLSTESTn1 : 10.01 6573.50 25.68 0.00 0.00 19441.63 3505.75 15682.85 00:12:59.290 [2024-12-09T10:55:52.469Z] =================================================================================================================== 00:12:59.290 [2024-12-09T10:55:52.469Z] Total : 6573.50 25.68 0.00 0.00 19441.63 3505.75 15682.85 00:12:59.290 { 00:12:59.290 "results": [ 00:12:59.290 { 00:12:59.290 "job": "TLSTESTn1", 00:12:59.290 "core_mask": "0x4", 00:12:59.290 "workload": "verify", 00:12:59.290 "status": "finished", 00:12:59.290 "verify_range": { 00:12:59.290 "start": 0, 00:12:59.290 "length": 8192 00:12:59.290 }, 00:12:59.290 "queue_depth": 128, 00:12:59.290 "io_size": 4096, 00:12:59.290 "runtime": 10.010046, 00:12:59.290 "iops": 6573.496265651526, 00:12:59.290 "mibps": 25.677719787701275, 00:12:59.290 "io_failed": 0, 00:12:59.290 "io_timeout": 0, 00:12:59.290 "avg_latency_us": 19441.633052947986, 00:12:59.290 "min_latency_us": 3505.7467248908297, 00:12:59.290 
"max_latency_us": 15682.850655021834 00:12:59.290 } 00:12:59.290 ], 00:12:59.290 "core_count": 1 00:12:59.290 } 00:12:59.290 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71682 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71682 ']' 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71682 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71682 00:12:59.291 killing process with pid 71682 00:12:59.291 Received shutdown signal, test time was about 10.000000 seconds 00:12:59.291 00:12:59.291 Latency(us) 00:12:59.291 [2024-12-09T10:55:52.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.291 [2024-12-09T10:55:52.470Z] =================================================================================================================== 00:12:59.291 [2024-12-09T10:55:52.470Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71682' 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71682 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71682 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ltsKTZPN6i 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ltsKTZPN6i 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ltsKTZPN6i 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ltsKTZPN6i 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ltsKTZPN6i 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71815 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71815 /var/tmp/bdevperf.sock 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71815 ']' 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:59.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.291 10:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:59.291 [2024-12-09 10:55:52.432201] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
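Just before this run the key file was loosened with chmod 0666 (target/tls.sh@171 above), so bdevperf pid 71815 exercises the host-side failure path: keyring_file refuses a key file whose mode grants group/other access, and the attach then fails because key0 was never added. As a sketch with this job's paths:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  chmod 0666 /tmp/tmp.ltsKTZPN6i
  # rejected below with "Invalid permissions for key file ... 0100666"
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ltsKTZPN6i
  # and therefore "Required key not available" on the TLS attach
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0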
00:12:59.291 [2024-12-09 10:55:52.432312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71815 ] 00:12:59.550 [2024-12-09 10:55:52.582874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.550 [2024-12-09 10:55:52.628024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.550 [2024-12-09 10:55:52.667413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:00.117 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.117 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:00.117 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ltsKTZPN6i 00:13:00.375 [2024-12-09 10:55:53.475256] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ltsKTZPN6i': 0100666 00:13:00.375 [2024-12-09 10:55:53.475371] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:00.375 request: 00:13:00.375 { 00:13:00.375 "name": "key0", 00:13:00.375 "path": "/tmp/tmp.ltsKTZPN6i", 00:13:00.375 "method": "keyring_file_add_key", 00:13:00.375 "req_id": 1 00:13:00.375 } 00:13:00.375 Got JSON-RPC error response 00:13:00.375 response: 00:13:00.375 { 00:13:00.375 "code": -1, 00:13:00.375 "message": "Operation not permitted" 00:13:00.375 } 00:13:00.375 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:00.633 [2024-12-09 10:55:53.679001] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:00.633 [2024-12-09 10:55:53.679159] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:00.633 request: 00:13:00.633 { 00:13:00.633 "name": "TLSTEST", 00:13:00.633 "trtype": "tcp", 00:13:00.633 "traddr": "10.0.0.3", 00:13:00.633 "adrfam": "ipv4", 00:13:00.633 "trsvcid": "4420", 00:13:00.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:00.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:00.633 "prchk_reftag": false, 00:13:00.633 "prchk_guard": false, 00:13:00.633 "hdgst": false, 00:13:00.633 "ddgst": false, 00:13:00.633 "psk": "key0", 00:13:00.633 "allow_unrecognized_csi": false, 00:13:00.633 "method": "bdev_nvme_attach_controller", 00:13:00.633 "req_id": 1 00:13:00.633 } 00:13:00.633 Got JSON-RPC error response 00:13:00.633 response: 00:13:00.633 { 00:13:00.633 "code": -126, 00:13:00.633 "message": "Required key not available" 00:13:00.633 } 00:13:00.633 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71815 00:13:00.633 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71815 ']' 00:13:00.633 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71815 00:13:00.633 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:00.633 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.633 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71815 00:13:00.633 killing process with pid 71815 00:13:00.633 Received shutdown signal, test time was about 10.000000 seconds 00:13:00.633 00:13:00.633 Latency(us) 00:13:00.633 [2024-12-09T10:55:53.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.633 [2024-12-09T10:55:53.812Z] =================================================================================================================== 00:13:00.633 [2024-12-09T10:55:53.812Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:00.633 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:00.633 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:00.633 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71815' 00:13:00.633 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71815 00:13:00.633 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71815 00:13:00.892 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:00.892 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:00.892 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:00.892 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:00.892 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:00.892 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71623 00:13:00.892 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71623 ']' 00:13:00.892 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71623 00:13:00.892 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:00.892 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.892 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71623 00:13:00.892 killing process with pid 71623 00:13:00.892 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:00.892 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:00.892 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71623' 00:13:00.892 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71623 00:13:00.892 10:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71623 00:13:01.151 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:13:01.151 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:01.151 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:01.151 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:13:01.151 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71861 00:13:01.151 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:01.151 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71861 00:13:01.151 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71861 ']' 00:13:01.151 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.151 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.151 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.151 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.151 10:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:01.151 [2024-12-09 10:55:54.245815] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:13:01.151 [2024-12-09 10:55:54.245871] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.408 [2024-12-09 10:55:54.394553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.408 [2024-12-09 10:55:54.443505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.408 [2024-12-09 10:55:54.443627] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.408 [2024-12-09 10:55:54.443637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.408 [2024-12-09 10:55:54.443642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.408 [2024-12-09 10:55:54.443646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
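This nvmf_tgt instance (pid 71861) repeats the target-side setup while the key file is still mode 0666, and the NOT wrapper around setup_nvmf_tgt expects it to fail: keyring_file_add_key rejects the world-readable file, so key0 never exists and the following add_host call returns -32603. The two RPCs that actually fail in the trace below are, in effect:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc keyring_file_add_key key0 /tmp/tmp.ltsKTZPN6i        # fails while the file is 0666
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0                  # -32603: Key 'key0' does not exist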
00:13:01.408 [2024-12-09 10:55:54.443942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.408 [2024-12-09 10:55:54.484674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:01.975 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.975 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:01.975 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:01.975 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:01.975 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:02.233 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.233 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ltsKTZPN6i 00:13:02.234 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:02.234 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ltsKTZPN6i 00:13:02.234 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:13:02.234 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.234 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:13:02.234 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.234 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.ltsKTZPN6i 00:13:02.234 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ltsKTZPN6i 00:13:02.234 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:02.234 [2024-12-09 10:55:55.349272] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.234 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:02.492 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:02.750 [2024-12-09 10:55:55.752584] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:02.750 [2024-12-09 10:55:55.752799] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:02.750 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:03.008 malloc0 00:13:03.008 10:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:03.008 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ltsKTZPN6i 00:13:03.266 
[2024-12-09 10:55:56.343860] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ltsKTZPN6i': 0100666 00:13:03.267 [2024-12-09 10:55:56.343989] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:03.267 request: 00:13:03.267 { 00:13:03.267 "name": "key0", 00:13:03.267 "path": "/tmp/tmp.ltsKTZPN6i", 00:13:03.267 "method": "keyring_file_add_key", 00:13:03.267 "req_id": 1 00:13:03.267 } 00:13:03.267 Got JSON-RPC error response 00:13:03.267 response: 00:13:03.267 { 00:13:03.267 "code": -1, 00:13:03.267 "message": "Operation not permitted" 00:13:03.267 } 00:13:03.267 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:03.526 [2024-12-09 10:55:56.523547] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:13:03.526 [2024-12-09 10:55:56.523680] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:03.526 request: 00:13:03.526 { 00:13:03.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:03.526 "host": "nqn.2016-06.io.spdk:host1", 00:13:03.526 "psk": "key0", 00:13:03.526 "method": "nvmf_subsystem_add_host", 00:13:03.526 "req_id": 1 00:13:03.526 } 00:13:03.526 Got JSON-RPC error response 00:13:03.526 response: 00:13:03.526 { 00:13:03.526 "code": -32603, 00:13:03.526 "message": "Internal error" 00:13:03.526 } 00:13:03.526 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:03.526 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:03.526 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:03.526 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:03.526 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71861 00:13:03.526 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71861 ']' 00:13:03.526 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71861 00:13:03.526 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:03.526 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:03.526 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71861 00:13:03.526 killing process with pid 71861 00:13:03.526 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:03.526 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:03.526 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71861' 00:13:03.526 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71861 00:13:03.526 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71861 00:13:03.786 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ltsKTZPN6i 00:13:03.786 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:13:03.786 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:03.786 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:03.786 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.786 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71935 00:13:03.786 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71935 00:13:03.786 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71935 ']' 00:13:03.786 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.786 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.786 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.786 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.786 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:03.786 10:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:03.786 [2024-12-09 10:55:56.847556] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:13:03.786 [2024-12-09 10:55:56.847614] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.045 [2024-12-09 10:55:56.981009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.045 [2024-12-09 10:55:57.042175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.045 [2024-12-09 10:55:57.042341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.045 [2024-12-09 10:55:57.042386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.045 [2024-12-09 10:55:57.042419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.045 [2024-12-09 10:55:57.042438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:04.045 [2024-12-09 10:55:57.042836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.045 [2024-12-09 10:55:57.092368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:04.614 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.614 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:04.614 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:04.614 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:04.614 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:04.614 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.614 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ltsKTZPN6i 00:13:04.614 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ltsKTZPN6i 00:13:04.614 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:04.874 [2024-12-09 10:55:57.959273] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.874 10:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:05.132 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:05.392 [2024-12-09 10:55:58.362536] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:05.392 [2024-12-09 10:55:58.362722] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:05.392 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:05.392 malloc0 00:13:05.651 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:05.651 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ltsKTZPN6i 00:13:05.910 10:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:06.168 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71994 00:13:06.168 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:06.168 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:06.168 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71994 /var/tmp/bdevperf.sock 00:13:06.168 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71994 ']' 
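For the final pass the key file is back to mode 0600, the target (pid 71935) is set up cleanly, and this bdevperf instance (pid 71994) attaches over TLS once more; the test then snapshots both sides with save_config, which is what produces the long JSON dumps that follow (keyring, sock, bdev and nvmf subsystems, including the listener with "secure_channel": true). Roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  tgtconf=$($rpc save_config)                                  # target side, default /var/tmp/spdk.sock
  bdevperfconf=$($rpc -s /var/tmp/bdevperf.sock save_config)   # initiator side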
00:13:06.168 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:06.168 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:06.168 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:06.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:06.168 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:06.168 10:55:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:06.168 [2024-12-09 10:55:59.261861] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:13:06.168 [2024-12-09 10:55:59.262021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71994 ] 00:13:06.426 [2024-12-09 10:55:59.415023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.426 [2024-12-09 10:55:59.459996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.426 [2024-12-09 10:55:59.500489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:06.993 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.993 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:06.993 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ltsKTZPN6i 00:13:07.250 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:07.510 [2024-12-09 10:56:00.484878] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:07.510 TLSTESTn1 00:13:07.510 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:07.787 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:13:07.787 "subsystems": [ 00:13:07.787 { 00:13:07.787 "subsystem": "keyring", 00:13:07.787 "config": [ 00:13:07.787 { 00:13:07.787 "method": "keyring_file_add_key", 00:13:07.787 "params": { 00:13:07.787 "name": "key0", 00:13:07.787 "path": "/tmp/tmp.ltsKTZPN6i" 00:13:07.787 } 00:13:07.787 } 00:13:07.787 ] 00:13:07.787 }, 00:13:07.787 { 00:13:07.787 "subsystem": "iobuf", 00:13:07.787 "config": [ 00:13:07.787 { 00:13:07.787 "method": "iobuf_set_options", 00:13:07.787 "params": { 00:13:07.787 "small_pool_count": 8192, 00:13:07.787 "large_pool_count": 1024, 00:13:07.787 "small_bufsize": 8192, 00:13:07.787 "large_bufsize": 135168, 00:13:07.787 "enable_numa": false 00:13:07.787 } 00:13:07.787 } 00:13:07.787 ] 00:13:07.787 }, 00:13:07.787 { 00:13:07.787 "subsystem": "sock", 00:13:07.787 "config": [ 00:13:07.787 { 00:13:07.787 "method": "sock_set_default_impl", 00:13:07.787 "params": { 
00:13:07.787 "impl_name": "uring" 00:13:07.787 } 00:13:07.787 }, 00:13:07.787 { 00:13:07.787 "method": "sock_impl_set_options", 00:13:07.787 "params": { 00:13:07.787 "impl_name": "ssl", 00:13:07.787 "recv_buf_size": 4096, 00:13:07.787 "send_buf_size": 4096, 00:13:07.787 "enable_recv_pipe": true, 00:13:07.787 "enable_quickack": false, 00:13:07.787 "enable_placement_id": 0, 00:13:07.787 "enable_zerocopy_send_server": true, 00:13:07.787 "enable_zerocopy_send_client": false, 00:13:07.787 "zerocopy_threshold": 0, 00:13:07.787 "tls_version": 0, 00:13:07.787 "enable_ktls": false 00:13:07.787 } 00:13:07.787 }, 00:13:07.787 { 00:13:07.787 "method": "sock_impl_set_options", 00:13:07.787 "params": { 00:13:07.787 "impl_name": "posix", 00:13:07.787 "recv_buf_size": 2097152, 00:13:07.787 "send_buf_size": 2097152, 00:13:07.787 "enable_recv_pipe": true, 00:13:07.787 "enable_quickack": false, 00:13:07.787 "enable_placement_id": 0, 00:13:07.787 "enable_zerocopy_send_server": true, 00:13:07.787 "enable_zerocopy_send_client": false, 00:13:07.787 "zerocopy_threshold": 0, 00:13:07.787 "tls_version": 0, 00:13:07.787 "enable_ktls": false 00:13:07.787 } 00:13:07.787 }, 00:13:07.787 { 00:13:07.787 "method": "sock_impl_set_options", 00:13:07.787 "params": { 00:13:07.787 "impl_name": "uring", 00:13:07.787 "recv_buf_size": 2097152, 00:13:07.787 "send_buf_size": 2097152, 00:13:07.787 "enable_recv_pipe": true, 00:13:07.787 "enable_quickack": false, 00:13:07.787 "enable_placement_id": 0, 00:13:07.787 "enable_zerocopy_send_server": false, 00:13:07.787 "enable_zerocopy_send_client": false, 00:13:07.787 "zerocopy_threshold": 0, 00:13:07.787 "tls_version": 0, 00:13:07.787 "enable_ktls": false 00:13:07.787 } 00:13:07.787 } 00:13:07.787 ] 00:13:07.787 }, 00:13:07.787 { 00:13:07.787 "subsystem": "vmd", 00:13:07.787 "config": [] 00:13:07.787 }, 00:13:07.787 { 00:13:07.787 "subsystem": "accel", 00:13:07.787 "config": [ 00:13:07.787 { 00:13:07.787 "method": "accel_set_options", 00:13:07.787 "params": { 00:13:07.787 "small_cache_size": 128, 00:13:07.787 "large_cache_size": 16, 00:13:07.787 "task_count": 2048, 00:13:07.787 "sequence_count": 2048, 00:13:07.787 "buf_count": 2048 00:13:07.787 } 00:13:07.787 } 00:13:07.787 ] 00:13:07.787 }, 00:13:07.787 { 00:13:07.787 "subsystem": "bdev", 00:13:07.787 "config": [ 00:13:07.787 { 00:13:07.787 "method": "bdev_set_options", 00:13:07.787 "params": { 00:13:07.787 "bdev_io_pool_size": 65535, 00:13:07.787 "bdev_io_cache_size": 256, 00:13:07.787 "bdev_auto_examine": true, 00:13:07.787 "iobuf_small_cache_size": 128, 00:13:07.787 "iobuf_large_cache_size": 16 00:13:07.787 } 00:13:07.787 }, 00:13:07.787 { 00:13:07.787 "method": "bdev_raid_set_options", 00:13:07.787 "params": { 00:13:07.787 "process_window_size_kb": 1024, 00:13:07.787 "process_max_bandwidth_mb_sec": 0 00:13:07.787 } 00:13:07.787 }, 00:13:07.787 { 00:13:07.787 "method": "bdev_iscsi_set_options", 00:13:07.787 "params": { 00:13:07.787 "timeout_sec": 30 00:13:07.787 } 00:13:07.787 }, 00:13:07.787 { 00:13:07.787 "method": "bdev_nvme_set_options", 00:13:07.787 "params": { 00:13:07.787 "action_on_timeout": "none", 00:13:07.787 "timeout_us": 0, 00:13:07.787 "timeout_admin_us": 0, 00:13:07.787 "keep_alive_timeout_ms": 10000, 00:13:07.787 "arbitration_burst": 0, 00:13:07.787 "low_priority_weight": 0, 00:13:07.787 "medium_priority_weight": 0, 00:13:07.787 "high_priority_weight": 0, 00:13:07.787 "nvme_adminq_poll_period_us": 10000, 00:13:07.787 "nvme_ioq_poll_period_us": 0, 00:13:07.787 "io_queue_requests": 0, 00:13:07.787 "delay_cmd_submit": 
true, 00:13:07.787 "transport_retry_count": 4, 00:13:07.787 "bdev_retry_count": 3, 00:13:07.787 "transport_ack_timeout": 0, 00:13:07.787 "ctrlr_loss_timeout_sec": 0, 00:13:07.787 "reconnect_delay_sec": 0, 00:13:07.787 "fast_io_fail_timeout_sec": 0, 00:13:07.787 "disable_auto_failback": false, 00:13:07.787 "generate_uuids": false, 00:13:07.787 "transport_tos": 0, 00:13:07.787 "nvme_error_stat": false, 00:13:07.787 "rdma_srq_size": 0, 00:13:07.787 "io_path_stat": false, 00:13:07.787 "allow_accel_sequence": false, 00:13:07.787 "rdma_max_cq_size": 0, 00:13:07.787 "rdma_cm_event_timeout_ms": 0, 00:13:07.787 "dhchap_digests": [ 00:13:07.787 "sha256", 00:13:07.787 "sha384", 00:13:07.787 "sha512" 00:13:07.787 ], 00:13:07.787 "dhchap_dhgroups": [ 00:13:07.787 "null", 00:13:07.787 "ffdhe2048", 00:13:07.787 "ffdhe3072", 00:13:07.787 "ffdhe4096", 00:13:07.787 "ffdhe6144", 00:13:07.787 "ffdhe8192" 00:13:07.787 ] 00:13:07.787 } 00:13:07.787 }, 00:13:07.787 { 00:13:07.788 "method": "bdev_nvme_set_hotplug", 00:13:07.788 "params": { 00:13:07.788 "period_us": 100000, 00:13:07.788 "enable": false 00:13:07.788 } 00:13:07.788 }, 00:13:07.788 { 00:13:07.788 "method": "bdev_malloc_create", 00:13:07.788 "params": { 00:13:07.788 "name": "malloc0", 00:13:07.788 "num_blocks": 8192, 00:13:07.788 "block_size": 4096, 00:13:07.788 "physical_block_size": 4096, 00:13:07.788 "uuid": "72f00ebb-b3a9-4c32-a212-d0f97ef800e4", 00:13:07.788 "optimal_io_boundary": 0, 00:13:07.788 "md_size": 0, 00:13:07.788 "dif_type": 0, 00:13:07.788 "dif_is_head_of_md": false, 00:13:07.788 "dif_pi_format": 0 00:13:07.788 } 00:13:07.788 }, 00:13:07.788 { 00:13:07.788 "method": "bdev_wait_for_examine" 00:13:07.788 } 00:13:07.788 ] 00:13:07.788 }, 00:13:07.788 { 00:13:07.788 "subsystem": "nbd", 00:13:07.788 "config": [] 00:13:07.788 }, 00:13:07.788 { 00:13:07.788 "subsystem": "scheduler", 00:13:07.788 "config": [ 00:13:07.788 { 00:13:07.788 "method": "framework_set_scheduler", 00:13:07.788 "params": { 00:13:07.788 "name": "static" 00:13:07.788 } 00:13:07.788 } 00:13:07.788 ] 00:13:07.788 }, 00:13:07.788 { 00:13:07.788 "subsystem": "nvmf", 00:13:07.788 "config": [ 00:13:07.788 { 00:13:07.788 "method": "nvmf_set_config", 00:13:07.788 "params": { 00:13:07.788 "discovery_filter": "match_any", 00:13:07.788 "admin_cmd_passthru": { 00:13:07.788 "identify_ctrlr": false 00:13:07.788 }, 00:13:07.788 "dhchap_digests": [ 00:13:07.788 "sha256", 00:13:07.788 "sha384", 00:13:07.788 "sha512" 00:13:07.788 ], 00:13:07.788 "dhchap_dhgroups": [ 00:13:07.788 "null", 00:13:07.788 "ffdhe2048", 00:13:07.788 "ffdhe3072", 00:13:07.788 "ffdhe4096", 00:13:07.788 "ffdhe6144", 00:13:07.788 "ffdhe8192" 00:13:07.788 ] 00:13:07.788 } 00:13:07.788 }, 00:13:07.788 { 00:13:07.788 "method": "nvmf_set_max_subsystems", 00:13:07.788 "params": { 00:13:07.788 "max_subsystems": 1024 00:13:07.788 } 00:13:07.788 }, 00:13:07.788 { 00:13:07.788 "method": "nvmf_set_crdt", 00:13:07.788 "params": { 00:13:07.788 "crdt1": 0, 00:13:07.788 "crdt2": 0, 00:13:07.788 "crdt3": 0 00:13:07.788 } 00:13:07.788 }, 00:13:07.788 { 00:13:07.788 "method": "nvmf_create_transport", 00:13:07.788 "params": { 00:13:07.788 "trtype": "TCP", 00:13:07.788 "max_queue_depth": 128, 00:13:07.788 "max_io_qpairs_per_ctrlr": 127, 00:13:07.788 "in_capsule_data_size": 4096, 00:13:07.788 "max_io_size": 131072, 00:13:07.788 "io_unit_size": 131072, 00:13:07.788 "max_aq_depth": 128, 00:13:07.788 "num_shared_buffers": 511, 00:13:07.788 "buf_cache_size": 4294967295, 00:13:07.788 "dif_insert_or_strip": false, 00:13:07.788 "zcopy": false, 
00:13:07.788 "c2h_success": false, 00:13:07.788 "sock_priority": 0, 00:13:07.788 "abort_timeout_sec": 1, 00:13:07.788 "ack_timeout": 0, 00:13:07.788 "data_wr_pool_size": 0 00:13:07.788 } 00:13:07.788 }, 00:13:07.788 { 00:13:07.788 "method": "nvmf_create_subsystem", 00:13:07.788 "params": { 00:13:07.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.788 "allow_any_host": false, 00:13:07.788 "serial_number": "SPDK00000000000001", 00:13:07.788 "model_number": "SPDK bdev Controller", 00:13:07.788 "max_namespaces": 10, 00:13:07.788 "min_cntlid": 1, 00:13:07.788 "max_cntlid": 65519, 00:13:07.788 "ana_reporting": false 00:13:07.788 } 00:13:07.788 }, 00:13:07.788 { 00:13:07.788 "method": "nvmf_subsystem_add_host", 00:13:07.788 "params": { 00:13:07.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.788 "host": "nqn.2016-06.io.spdk:host1", 00:13:07.788 "psk": "key0" 00:13:07.788 } 00:13:07.788 }, 00:13:07.788 { 00:13:07.788 "method": "nvmf_subsystem_add_ns", 00:13:07.788 "params": { 00:13:07.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.788 "namespace": { 00:13:07.788 "nsid": 1, 00:13:07.788 "bdev_name": "malloc0", 00:13:07.788 "nguid": "72F00EBBB3A94C32A212D0F97EF800E4", 00:13:07.788 "uuid": "72f00ebb-b3a9-4c32-a212-d0f97ef800e4", 00:13:07.788 "no_auto_visible": false 00:13:07.788 } 00:13:07.788 } 00:13:07.788 }, 00:13:07.788 { 00:13:07.788 "method": "nvmf_subsystem_add_listener", 00:13:07.788 "params": { 00:13:07.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.788 "listen_address": { 00:13:07.788 "trtype": "TCP", 00:13:07.788 "adrfam": "IPv4", 00:13:07.788 "traddr": "10.0.0.3", 00:13:07.788 "trsvcid": "4420" 00:13:07.788 }, 00:13:07.788 "secure_channel": true 00:13:07.788 } 00:13:07.788 } 00:13:07.788 ] 00:13:07.788 } 00:13:07.788 ] 00:13:07.788 }' 00:13:07.788 10:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:08.049 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:13:08.049 "subsystems": [ 00:13:08.049 { 00:13:08.049 "subsystem": "keyring", 00:13:08.049 "config": [ 00:13:08.049 { 00:13:08.049 "method": "keyring_file_add_key", 00:13:08.049 "params": { 00:13:08.049 "name": "key0", 00:13:08.049 "path": "/tmp/tmp.ltsKTZPN6i" 00:13:08.049 } 00:13:08.049 } 00:13:08.049 ] 00:13:08.049 }, 00:13:08.049 { 00:13:08.049 "subsystem": "iobuf", 00:13:08.049 "config": [ 00:13:08.049 { 00:13:08.049 "method": "iobuf_set_options", 00:13:08.049 "params": { 00:13:08.049 "small_pool_count": 8192, 00:13:08.049 "large_pool_count": 1024, 00:13:08.049 "small_bufsize": 8192, 00:13:08.049 "large_bufsize": 135168, 00:13:08.049 "enable_numa": false 00:13:08.049 } 00:13:08.049 } 00:13:08.049 ] 00:13:08.049 }, 00:13:08.049 { 00:13:08.049 "subsystem": "sock", 00:13:08.049 "config": [ 00:13:08.049 { 00:13:08.049 "method": "sock_set_default_impl", 00:13:08.049 "params": { 00:13:08.049 "impl_name": "uring" 00:13:08.049 } 00:13:08.049 }, 00:13:08.049 { 00:13:08.049 "method": "sock_impl_set_options", 00:13:08.049 "params": { 00:13:08.049 "impl_name": "ssl", 00:13:08.049 "recv_buf_size": 4096, 00:13:08.049 "send_buf_size": 4096, 00:13:08.049 "enable_recv_pipe": true, 00:13:08.049 "enable_quickack": false, 00:13:08.049 "enable_placement_id": 0, 00:13:08.049 "enable_zerocopy_send_server": true, 00:13:08.049 "enable_zerocopy_send_client": false, 00:13:08.049 "zerocopy_threshold": 0, 00:13:08.049 "tls_version": 0, 00:13:08.049 "enable_ktls": false 00:13:08.049 } 00:13:08.049 }, 
00:13:08.049 { 00:13:08.049 "method": "sock_impl_set_options", 00:13:08.049 "params": { 00:13:08.049 "impl_name": "posix", 00:13:08.049 "recv_buf_size": 2097152, 00:13:08.049 "send_buf_size": 2097152, 00:13:08.049 "enable_recv_pipe": true, 00:13:08.049 "enable_quickack": false, 00:13:08.049 "enable_placement_id": 0, 00:13:08.049 "enable_zerocopy_send_server": true, 00:13:08.049 "enable_zerocopy_send_client": false, 00:13:08.049 "zerocopy_threshold": 0, 00:13:08.049 "tls_version": 0, 00:13:08.049 "enable_ktls": false 00:13:08.049 } 00:13:08.049 }, 00:13:08.049 { 00:13:08.049 "method": "sock_impl_set_options", 00:13:08.049 "params": { 00:13:08.049 "impl_name": "uring", 00:13:08.049 "recv_buf_size": 2097152, 00:13:08.049 "send_buf_size": 2097152, 00:13:08.049 "enable_recv_pipe": true, 00:13:08.049 "enable_quickack": false, 00:13:08.049 "enable_placement_id": 0, 00:13:08.049 "enable_zerocopy_send_server": false, 00:13:08.049 "enable_zerocopy_send_client": false, 00:13:08.049 "zerocopy_threshold": 0, 00:13:08.049 "tls_version": 0, 00:13:08.049 "enable_ktls": false 00:13:08.049 } 00:13:08.049 } 00:13:08.049 ] 00:13:08.049 }, 00:13:08.049 { 00:13:08.049 "subsystem": "vmd", 00:13:08.049 "config": [] 00:13:08.049 }, 00:13:08.049 { 00:13:08.049 "subsystem": "accel", 00:13:08.049 "config": [ 00:13:08.049 { 00:13:08.049 "method": "accel_set_options", 00:13:08.049 "params": { 00:13:08.049 "small_cache_size": 128, 00:13:08.049 "large_cache_size": 16, 00:13:08.049 "task_count": 2048, 00:13:08.049 "sequence_count": 2048, 00:13:08.049 "buf_count": 2048 00:13:08.049 } 00:13:08.049 } 00:13:08.049 ] 00:13:08.049 }, 00:13:08.049 { 00:13:08.049 "subsystem": "bdev", 00:13:08.050 "config": [ 00:13:08.050 { 00:13:08.050 "method": "bdev_set_options", 00:13:08.050 "params": { 00:13:08.050 "bdev_io_pool_size": 65535, 00:13:08.050 "bdev_io_cache_size": 256, 00:13:08.050 "bdev_auto_examine": true, 00:13:08.050 "iobuf_small_cache_size": 128, 00:13:08.050 "iobuf_large_cache_size": 16 00:13:08.050 } 00:13:08.050 }, 00:13:08.050 { 00:13:08.050 "method": "bdev_raid_set_options", 00:13:08.050 "params": { 00:13:08.050 "process_window_size_kb": 1024, 00:13:08.050 "process_max_bandwidth_mb_sec": 0 00:13:08.050 } 00:13:08.050 }, 00:13:08.050 { 00:13:08.050 "method": "bdev_iscsi_set_options", 00:13:08.050 "params": { 00:13:08.050 "timeout_sec": 30 00:13:08.050 } 00:13:08.050 }, 00:13:08.050 { 00:13:08.050 "method": "bdev_nvme_set_options", 00:13:08.050 "params": { 00:13:08.050 "action_on_timeout": "none", 00:13:08.050 "timeout_us": 0, 00:13:08.050 "timeout_admin_us": 0, 00:13:08.050 "keep_alive_timeout_ms": 10000, 00:13:08.050 "arbitration_burst": 0, 00:13:08.050 "low_priority_weight": 0, 00:13:08.050 "medium_priority_weight": 0, 00:13:08.050 "high_priority_weight": 0, 00:13:08.050 "nvme_adminq_poll_period_us": 10000, 00:13:08.050 "nvme_ioq_poll_period_us": 0, 00:13:08.050 "io_queue_requests": 512, 00:13:08.050 "delay_cmd_submit": true, 00:13:08.050 "transport_retry_count": 4, 00:13:08.050 "bdev_retry_count": 3, 00:13:08.050 "transport_ack_timeout": 0, 00:13:08.050 "ctrlr_loss_timeout_sec": 0, 00:13:08.050 "reconnect_delay_sec": 0, 00:13:08.050 "fast_io_fail_timeout_sec": 0, 00:13:08.050 "disable_auto_failback": false, 00:13:08.050 "generate_uuids": false, 00:13:08.050 "transport_tos": 0, 00:13:08.050 "nvme_error_stat": false, 00:13:08.050 "rdma_srq_size": 0, 00:13:08.050 "io_path_stat": false, 00:13:08.050 "allow_accel_sequence": false, 00:13:08.050 "rdma_max_cq_size": 0, 00:13:08.050 "rdma_cm_event_timeout_ms": 0, 00:13:08.050 
"dhchap_digests": [ 00:13:08.050 "sha256", 00:13:08.050 "sha384", 00:13:08.050 "sha512" 00:13:08.050 ], 00:13:08.050 "dhchap_dhgroups": [ 00:13:08.050 "null", 00:13:08.050 "ffdhe2048", 00:13:08.050 "ffdhe3072", 00:13:08.050 "ffdhe4096", 00:13:08.050 "ffdhe6144", 00:13:08.050 "ffdhe8192" 00:13:08.050 ] 00:13:08.050 } 00:13:08.050 }, 00:13:08.050 { 00:13:08.050 "method": "bdev_nvme_attach_controller", 00:13:08.050 "params": { 00:13:08.050 "name": "TLSTEST", 00:13:08.050 "trtype": "TCP", 00:13:08.050 "adrfam": "IPv4", 00:13:08.050 "traddr": "10.0.0.3", 00:13:08.050 "trsvcid": "4420", 00:13:08.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.050 "prchk_reftag": false, 00:13:08.050 "prchk_guard": false, 00:13:08.050 "ctrlr_loss_timeout_sec": 0, 00:13:08.050 "reconnect_delay_sec": 0, 00:13:08.050 "fast_io_fail_timeout_sec": 0, 00:13:08.050 "psk": "key0", 00:13:08.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:08.050 "hdgst": false, 00:13:08.050 "ddgst": false, 00:13:08.050 "multipath": "multipath" 00:13:08.050 } 00:13:08.050 }, 00:13:08.050 { 00:13:08.050 "method": "bdev_nvme_set_hotplug", 00:13:08.050 "params": { 00:13:08.050 "period_us": 100000, 00:13:08.050 "enable": false 00:13:08.050 } 00:13:08.050 }, 00:13:08.050 { 00:13:08.050 "method": "bdev_wait_for_examine" 00:13:08.050 } 00:13:08.050 ] 00:13:08.050 }, 00:13:08.050 { 00:13:08.050 "subsystem": "nbd", 00:13:08.050 "config": [] 00:13:08.050 } 00:13:08.050 ] 00:13:08.050 }' 00:13:08.050 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71994 00:13:08.050 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71994 ']' 00:13:08.050 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71994 00:13:08.050 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:08.050 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.050 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71994 00:13:08.050 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:08.050 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:08.050 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71994' 00:13:08.050 killing process with pid 71994 00:13:08.050 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71994 00:13:08.050 Received shutdown signal, test time was about 10.000000 seconds 00:13:08.050 00:13:08.050 Latency(us) 00:13:08.050 [2024-12-09T10:56:01.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.050 [2024-12-09T10:56:01.229Z] =================================================================================================================== 00:13:08.050 [2024-12-09T10:56:01.229Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:08.050 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71994 00:13:08.310 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71935 00:13:08.310 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71935 ']' 00:13:08.310 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 71935 00:13:08.310 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:08.310 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.310 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71935 00:13:08.310 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:08.310 killing process with pid 71935 00:13:08.310 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:08.310 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71935' 00:13:08.310 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71935 00:13:08.310 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71935 00:13:08.570 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:08.570 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:08.570 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:08.570 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:13:08.570 "subsystems": [ 00:13:08.570 { 00:13:08.570 "subsystem": "keyring", 00:13:08.570 "config": [ 00:13:08.570 { 00:13:08.570 "method": "keyring_file_add_key", 00:13:08.570 "params": { 00:13:08.570 "name": "key0", 00:13:08.570 "path": "/tmp/tmp.ltsKTZPN6i" 00:13:08.570 } 00:13:08.570 } 00:13:08.570 ] 00:13:08.570 }, 00:13:08.570 { 00:13:08.570 "subsystem": "iobuf", 00:13:08.570 "config": [ 00:13:08.570 { 00:13:08.570 "method": "iobuf_set_options", 00:13:08.570 "params": { 00:13:08.570 "small_pool_count": 8192, 00:13:08.570 "large_pool_count": 1024, 00:13:08.570 "small_bufsize": 8192, 00:13:08.570 "large_bufsize": 135168, 00:13:08.570 "enable_numa": false 00:13:08.570 } 00:13:08.570 } 00:13:08.570 ] 00:13:08.570 }, 00:13:08.570 { 00:13:08.570 "subsystem": "sock", 00:13:08.570 "config": [ 00:13:08.570 { 00:13:08.570 "method": "sock_set_default_impl", 00:13:08.570 "params": { 00:13:08.570 "impl_name": "uring" 00:13:08.570 } 00:13:08.570 }, 00:13:08.570 { 00:13:08.570 "method": "sock_impl_set_options", 00:13:08.570 "params": { 00:13:08.570 "impl_name": "ssl", 00:13:08.570 "recv_buf_size": 4096, 00:13:08.570 "send_buf_size": 4096, 00:13:08.570 "enable_recv_pipe": true, 00:13:08.570 "enable_quickack": false, 00:13:08.570 "enable_placement_id": 0, 00:13:08.570 "enable_zerocopy_send_server": true, 00:13:08.570 "enable_zerocopy_send_client": false, 00:13:08.570 "zerocopy_threshold": 0, 00:13:08.570 "tls_version": 0, 00:13:08.570 "enable_ktls": false 00:13:08.570 } 00:13:08.570 }, 00:13:08.570 { 00:13:08.570 "method": "sock_impl_set_options", 00:13:08.570 "params": { 00:13:08.570 "impl_name": "posix", 00:13:08.570 "recv_buf_size": 2097152, 00:13:08.570 "send_buf_size": 2097152, 00:13:08.570 "enable_recv_pipe": true, 00:13:08.570 "enable_quickack": false, 00:13:08.570 "enable_placement_id": 0, 00:13:08.570 "enable_zerocopy_send_server": true, 00:13:08.570 "enable_zerocopy_send_client": false, 00:13:08.570 "zerocopy_threshold": 0, 00:13:08.570 "tls_version": 0, 00:13:08.570 "enable_ktls": false 00:13:08.570 } 00:13:08.570 }, 00:13:08.570 { 00:13:08.570 "method": "sock_impl_set_options", 
00:13:08.570 "params": { 00:13:08.570 "impl_name": "uring", 00:13:08.570 "recv_buf_size": 2097152, 00:13:08.570 "send_buf_size": 2097152, 00:13:08.570 "enable_recv_pipe": true, 00:13:08.570 "enable_quickack": false, 00:13:08.570 "enable_placement_id": 0, 00:13:08.570 "enable_zerocopy_send_server": false, 00:13:08.570 "enable_zerocopy_send_client": false, 00:13:08.570 "zerocopy_threshold": 0, 00:13:08.570 "tls_version": 0, 00:13:08.570 "enable_ktls": false 00:13:08.570 } 00:13:08.570 } 00:13:08.570 ] 00:13:08.570 }, 00:13:08.570 { 00:13:08.570 "subsystem": "vmd", 00:13:08.570 "config": [] 00:13:08.570 }, 00:13:08.570 { 00:13:08.570 "subsystem": "accel", 00:13:08.570 "config": [ 00:13:08.570 { 00:13:08.570 "method": "accel_set_options", 00:13:08.570 "params": { 00:13:08.570 "small_cache_size": 128, 00:13:08.570 "large_cache_size": 16, 00:13:08.570 "task_count": 2048, 00:13:08.570 "sequence_count": 2048, 00:13:08.570 "buf_count": 2048 00:13:08.570 } 00:13:08.570 } 00:13:08.570 ] 00:13:08.570 }, 00:13:08.570 { 00:13:08.570 "subsystem": "bdev", 00:13:08.570 "config": [ 00:13:08.570 { 00:13:08.570 "method": "bdev_set_options", 00:13:08.570 "params": { 00:13:08.570 "bdev_io_pool_size": 65535, 00:13:08.570 "bdev_io_cache_size": 256, 00:13:08.570 "bdev_auto_examine": true, 00:13:08.570 "iobuf_small_cache_size": 128, 00:13:08.570 "iobuf_large_cache_size": 16 00:13:08.570 } 00:13:08.570 }, 00:13:08.570 { 00:13:08.570 "method": "bdev_raid_set_options", 00:13:08.570 "params": { 00:13:08.570 "process_window_size_kb": 1024, 00:13:08.570 "process_max_bandwidth_mb_sec": 0 00:13:08.570 } 00:13:08.570 }, 00:13:08.570 { 00:13:08.570 "method": "bdev_iscsi_set_options", 00:13:08.570 "params": { 00:13:08.570 "timeout_sec": 30 00:13:08.570 } 00:13:08.570 }, 00:13:08.570 { 00:13:08.570 "method": "bdev_nvme_set_options", 00:13:08.570 "params": { 00:13:08.570 "action_on_timeout": "none", 00:13:08.570 "timeout_us": 0, 00:13:08.570 "timeout_admin_us": 0, 00:13:08.570 "keep_alive_timeout_ms": 10000, 00:13:08.570 "arbitration_burst": 0, 00:13:08.570 "low_priority_weight": 0, 00:13:08.570 "medium_priority_weight": 0, 00:13:08.570 "high_priority_weight": 0, 00:13:08.570 "nvme_adminq_poll_period_us": 10000, 00:13:08.570 "nvme_ioq_poll_period_us": 0, 00:13:08.570 "io_queue_requests": 0, 00:13:08.570 "delay_cmd_submit": true, 00:13:08.570 "transport_retry_count": 4, 00:13:08.570 "bdev_retry_count": 3, 00:13:08.570 "transport_ack_timeout": 0, 00:13:08.570 "ctrlr_loss_timeout_sec": 0, 00:13:08.570 "reconnect_delay_sec": 0, 00:13:08.570 "fast_io_fail_timeout_sec": 0, 00:13:08.570 "disable_auto_failback": false, 00:13:08.570 "generate_uuids": false, 00:13:08.570 "transport_tos": 0, 00:13:08.570 "nvme_error_stat": false, 00:13:08.570 "rdma_srq_size": 0, 00:13:08.570 "io_path_stat": false, 00:13:08.570 "allow_accel_sequence": false, 00:13:08.570 "rdma_max_cq_size": 0, 00:13:08.570 "rdma_cm_event_timeout_ms": 0, 00:13:08.570 "dhchap_digests": [ 00:13:08.570 "sha256", 00:13:08.570 "sha384", 00:13:08.570 "sha512" 00:13:08.570 ], 00:13:08.570 "dhchap_dhgroups": [ 00:13:08.570 "null", 00:13:08.570 "ffdhe2048", 00:13:08.570 "ffdhe3072", 00:13:08.570 "ffdhe4096", 00:13:08.570 "ffdhe6144", 00:13:08.570 "ffdhe8192" 00:13:08.570 ] 00:13:08.570 } 00:13:08.570 }, 00:13:08.570 { 00:13:08.570 "method": "bdev_nvme_set_hotplug", 00:13:08.570 "params": { 00:13:08.570 "period_us": 100000, 00:13:08.570 "enable": false 00:13:08.570 } 00:13:08.570 }, 00:13:08.570 { 00:13:08.570 "method": "bdev_malloc_create", 00:13:08.570 "params": { 00:13:08.570 
"name": "malloc0", 00:13:08.570 "num_blocks": 8192, 00:13:08.570 "block_size": 4096, 00:13:08.570 "physical_block_size": 4096, 00:13:08.570 "uuid": "72f00ebb-b3a9-4c32-a212-d0f97ef800e4", 00:13:08.570 "optimal_io_boundary": 0, 00:13:08.570 "md_size": 0, 00:13:08.570 "dif_type": 0, 00:13:08.570 "dif_is_head_of_md": false, 00:13:08.570 "dif_pi_format": 0 00:13:08.570 } 00:13:08.570 }, 00:13:08.571 { 00:13:08.571 "method": "bdev_wait_for_examine" 00:13:08.571 } 00:13:08.571 ] 00:13:08.571 }, 00:13:08.571 { 00:13:08.571 "subsystem": "nbd", 00:13:08.571 "config": [] 00:13:08.571 }, 00:13:08.571 { 00:13:08.571 "subsystem": "scheduler", 00:13:08.571 "config": [ 00:13:08.571 { 00:13:08.571 "method": "framework_set_scheduler", 00:13:08.571 "params": { 00:13:08.571 "name": "static" 00:13:08.571 } 00:13:08.571 } 00:13:08.571 ] 00:13:08.571 }, 00:13:08.571 { 00:13:08.571 "subsystem": "nvmf", 00:13:08.571 "config": [ 00:13:08.571 { 00:13:08.571 "method": "nvmf_set_config", 00:13:08.571 "params": { 00:13:08.571 "discovery_filter": "match_any", 00:13:08.571 "admin_cmd_passthru": { 00:13:08.571 "identify_ctrlr": false 00:13:08.571 }, 00:13:08.571 "dhchap_digests": [ 00:13:08.571 "sha256", 00:13:08.571 "sha384", 00:13:08.571 "sha512" 00:13:08.571 ], 00:13:08.571 "dhchap_dhgroups": [ 00:13:08.571 "null", 00:13:08.571 "ffdhe2048", 00:13:08.571 "ffdhe3072", 00:13:08.571 "ffdhe4096", 00:13:08.571 "ffdhe6144", 00:13:08.571 "ffdhe8192" 00:13:08.571 ] 00:13:08.571 } 00:13:08.571 }, 00:13:08.571 { 00:13:08.571 "method": "nvmf_set_max_subsystems", 00:13:08.571 "params": { 00:13:08.571 "max_subsystems": 1024 00:13:08.571 } 00:13:08.571 }, 00:13:08.571 { 00:13:08.571 "method": "nvmf_set_crdt", 00:13:08.571 "params": { 00:13:08.571 "crdt1": 0, 00:13:08.571 "crdt2": 0, 00:13:08.571 "crdt3": 0 00:13:08.571 } 00:13:08.571 }, 00:13:08.571 { 00:13:08.571 "method": "nvmf_create_transport", 00:13:08.571 "params": { 00:13:08.571 "trtype": "TCP", 00:13:08.571 "max_queue_depth": 128, 00:13:08.571 "max_io_qpairs_per_ctrlr": 127, 00:13:08.571 "in_capsule_data_size": 4096, 00:13:08.571 "max_io_size": 131072, 00:13:08.571 "io_unit_size": 131072, 00:13:08.571 "max_aq_depth": 128, 00:13:08.571 "num_shared_buffers": 511, 00:13:08.571 "buf_cache_size": 4294967295, 00:13:08.571 "dif_insert_or_strip": false, 00:13:08.571 "zcopy": false, 00:13:08.571 "c2h_success": false, 00:13:08.571 "sock_priority": 0, 00:13:08.571 "abort_timeout_sec": 1, 00:13:08.571 "ack_timeout": 0, 00:13:08.571 "data_wr_pool_size": 0 00:13:08.571 } 00:13:08.571 }, 00:13:08.571 { 00:13:08.571 "method": "nvmf_create_subsystem", 00:13:08.571 "params": { 00:13:08.571 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.571 "allow_any_host": false, 00:13:08.571 "serial_number": "SPDK00000000000001", 00:13:08.571 "model_number": "SPDK bdev Controller", 00:13:08.571 "max_namespaces": 10, 00:13:08.571 "min_cntlid": 1, 00:13:08.571 "max_cntlid": 65519, 00:13:08.571 "ana_reporting": false 00:13:08.571 } 00:13:08.571 }, 00:13:08.571 { 00:13:08.571 "method": "nvmf_subsystem_add_host", 00:13:08.571 "params": { 00:13:08.571 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.571 "host": "nqn.2016-06.io.spdk:host1", 00:13:08.571 "psk": "key0" 00:13:08.571 } 00:13:08.571 }, 00:13:08.571 { 00:13:08.571 "method": "nvmf_subsystem_add_ns", 00:13:08.571 "params": { 00:13:08.571 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.571 "namespace": { 00:13:08.571 "nsid": 1, 00:13:08.571 "bdev_name": "malloc0", 00:13:08.571 "nguid": "72F00EBBB3A94C32A212D0F97EF800E4", 00:13:08.571 "uuid": 
"72f00ebb-b3a9-4c32-a212-d0f97ef800e4", 00:13:08.571 "no_auto_visible": false 00:13:08.571 } 00:13:08.571 } 00:13:08.571 }, 00:13:08.571 { 00:13:08.571 "method": "nvmf_subsystem_add_listener", 00:13:08.571 "params": { 00:13:08.571 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:08.571 "listen_address": { 00:13:08.571 "trtype": "TCP", 00:13:08.571 "adrfam": "IPv4", 00:13:08.571 "traddr": "10.0.0.3", 00:13:08.571 "trsvcid": "4420" 00:13:08.571 }, 00:13:08.571 "secure_channel": true 00:13:08.571 } 00:13:08.571 } 00:13:08.571 ] 00:13:08.571 } 00:13:08.571 ] 00:13:08.571 }' 00:13:08.571 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:08.571 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72038 00:13:08.571 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:08.571 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72038 00:13:08.571 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72038 ']' 00:13:08.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.571 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.571 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.571 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.571 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.571 10:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:08.571 [2024-12-09 10:56:01.708535] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:13:08.571 [2024-12-09 10:56:01.708595] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.830 [2024-12-09 10:56:01.857760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.830 [2024-12-09 10:56:01.901504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.830 [2024-12-09 10:56:01.901549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.830 [2024-12-09 10:56:01.901555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.830 [2024-12-09 10:56:01.901560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.830 [2024-12-09 10:56:01.901564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:08.830 [2024-12-09 10:56:01.901888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.089 [2024-12-09 10:56:02.055936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:09.089 [2024-12-09 10:56:02.126392] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.089 [2024-12-09 10:56:02.158382] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:09.089 [2024-12-09 10:56:02.158604] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:09.655 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.655 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:09.655 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:09.655 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:09.655 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:09.655 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.655 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72070 00:13:09.655 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72070 /var/tmp/bdevperf.sock 00:13:09.655 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72070 ']' 00:13:09.655 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:09.655 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.655 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:09.655 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:09.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
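Note: this bdevperf instance is started with -z (wait for RPC) and takes its whole configuration as JSON on /dev/fd/63; the echo that follows shows that config, whose TLS-relevant parts are the keyring_file_add_key entry and the bdev_nvme_attach_controller params with "psk": "key0". A rough sketch of the launch pattern, assuming the config is held in $bdevperfconf and supplied via bash process substitution (which is what the -c /dev/fd/63 argument above suggests):

    # Rough sketch, not the literal test script: feed the JSON config through
    # process substitution and start bdevperf in RPC-wait mode in the background.
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c <(echo "$bdevperfconf") &
    # once the socket is up, kick off the verify workload (as done later in this log)
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests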
00:13:09.655 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.655 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:13:09.655 "subsystems": [ 00:13:09.655 { 00:13:09.655 "subsystem": "keyring", 00:13:09.655 "config": [ 00:13:09.655 { 00:13:09.655 "method": "keyring_file_add_key", 00:13:09.655 "params": { 00:13:09.655 "name": "key0", 00:13:09.655 "path": "/tmp/tmp.ltsKTZPN6i" 00:13:09.655 } 00:13:09.655 } 00:13:09.655 ] 00:13:09.655 }, 00:13:09.655 { 00:13:09.655 "subsystem": "iobuf", 00:13:09.655 "config": [ 00:13:09.655 { 00:13:09.655 "method": "iobuf_set_options", 00:13:09.655 "params": { 00:13:09.655 "small_pool_count": 8192, 00:13:09.655 "large_pool_count": 1024, 00:13:09.655 "small_bufsize": 8192, 00:13:09.655 "large_bufsize": 135168, 00:13:09.655 "enable_numa": false 00:13:09.655 } 00:13:09.655 } 00:13:09.655 ] 00:13:09.655 }, 00:13:09.655 { 00:13:09.655 "subsystem": "sock", 00:13:09.655 "config": [ 00:13:09.655 { 00:13:09.655 "method": "sock_set_default_impl", 00:13:09.655 "params": { 00:13:09.655 "impl_name": "uring" 00:13:09.655 } 00:13:09.655 }, 00:13:09.655 { 00:13:09.655 "method": "sock_impl_set_options", 00:13:09.655 "params": { 00:13:09.655 "impl_name": "ssl", 00:13:09.655 "recv_buf_size": 4096, 00:13:09.655 "send_buf_size": 4096, 00:13:09.655 "enable_recv_pipe": true, 00:13:09.655 "enable_quickack": false, 00:13:09.655 "enable_placement_id": 0, 00:13:09.655 "enable_zerocopy_send_server": true, 00:13:09.655 "enable_zerocopy_send_client": false, 00:13:09.655 "zerocopy_threshold": 0, 00:13:09.655 "tls_version": 0, 00:13:09.655 "enable_ktls": false 00:13:09.655 } 00:13:09.655 }, 00:13:09.655 { 00:13:09.655 "method": "sock_impl_set_options", 00:13:09.655 "params": { 00:13:09.655 "impl_name": "posix", 00:13:09.655 "recv_buf_size": 2097152, 00:13:09.655 "send_buf_size": 2097152, 00:13:09.655 "enable_recv_pipe": true, 00:13:09.655 "enable_quickack": false, 00:13:09.655 "enable_placement_id": 0, 00:13:09.655 "enable_zerocopy_send_server": true, 00:13:09.655 "enable_zerocopy_send_client": false, 00:13:09.655 "zerocopy_threshold": 0, 00:13:09.655 "tls_version": 0, 00:13:09.655 "enable_ktls": false 00:13:09.655 } 00:13:09.655 }, 00:13:09.655 { 00:13:09.655 "method": "sock_impl_set_options", 00:13:09.655 "params": { 00:13:09.655 "impl_name": "uring", 00:13:09.655 "recv_buf_size": 2097152, 00:13:09.655 "send_buf_size": 2097152, 00:13:09.655 "enable_recv_pipe": true, 00:13:09.655 "enable_quickack": false, 00:13:09.655 "enable_placement_id": 0, 00:13:09.655 "enable_zerocopy_send_server": false, 00:13:09.655 "enable_zerocopy_send_client": false, 00:13:09.655 "zerocopy_threshold": 0, 00:13:09.655 "tls_version": 0, 00:13:09.655 "enable_ktls": false 00:13:09.655 } 00:13:09.655 } 00:13:09.655 ] 00:13:09.655 }, 00:13:09.655 { 00:13:09.655 "subsystem": "vmd", 00:13:09.655 "config": [] 00:13:09.655 }, 00:13:09.655 { 00:13:09.655 "subsystem": "accel", 00:13:09.655 "config": [ 00:13:09.655 { 00:13:09.655 "method": "accel_set_options", 00:13:09.655 "params": { 00:13:09.655 "small_cache_size": 128, 00:13:09.655 "large_cache_size": 16, 00:13:09.655 "task_count": 2048, 00:13:09.655 "sequence_count": 2048, 00:13:09.655 "buf_count": 2048 00:13:09.655 } 00:13:09.655 } 00:13:09.655 ] 00:13:09.655 }, 00:13:09.655 { 00:13:09.655 "subsystem": "bdev", 00:13:09.655 "config": [ 00:13:09.655 { 00:13:09.655 "method": "bdev_set_options", 00:13:09.655 "params": { 00:13:09.655 "bdev_io_pool_size": 65535, 00:13:09.655 
"bdev_io_cache_size": 256, 00:13:09.655 "bdev_auto_examine": true, 00:13:09.655 "iobuf_small_cache_size": 128, 00:13:09.655 "iobuf_large_cache_size": 16 00:13:09.655 } 00:13:09.655 }, 00:13:09.655 { 00:13:09.655 "method": "bdev_raid_set_options", 00:13:09.655 "params": { 00:13:09.655 "process_window_size_kb": 1024, 00:13:09.655 "process_max_bandwidth_mb_sec": 0 00:13:09.655 } 00:13:09.655 }, 00:13:09.655 { 00:13:09.655 "method": "bdev_iscsi_set_options", 00:13:09.655 "params": { 00:13:09.655 "timeout_sec": 30 00:13:09.655 } 00:13:09.655 }, 00:13:09.655 { 00:13:09.655 "method": "bdev_nvme_set_options", 00:13:09.655 "params": { 00:13:09.655 "action_on_timeout": "none", 00:13:09.655 "timeout_us": 0, 00:13:09.655 "timeout_admin_us": 0, 00:13:09.655 "keep_alive_timeout_ms": 10000, 00:13:09.655 "arbitration_burst": 0, 00:13:09.655 "low_priority_weight": 0, 00:13:09.655 "medium_priority_weight": 0, 00:13:09.655 "high_priority_weight": 0, 00:13:09.655 "nvme_adminq_poll_period_us": 10000, 00:13:09.655 "nvme_ioq_poll_period_us": 0, 00:13:09.655 "io_queue_requests": 512, 00:13:09.655 "delay_cmd_submit": true, 00:13:09.655 "transport_retry_count": 4, 00:13:09.655 "bdev_retry_count": 3, 00:13:09.655 "transport_ack_timeout": 0, 00:13:09.655 "ctrlr_loss_timeout_sec": 0, 00:13:09.655 "reconnect_delay_sec": 0, 00:13:09.655 "fast_io_fail_timeout_sec": 0, 00:13:09.655 "disable_auto_failback": false, 00:13:09.655 "generate_uuids": false, 00:13:09.655 "transport_tos": 0, 00:13:09.655 "nvme_error_stat": false, 00:13:09.655 "rdma_srq_size": 0, 00:13:09.655 "io_path_stat": false, 00:13:09.655 "allow_accel_sequence": false, 00:13:09.655 "rdma_max_cq_size": 0, 00:13:09.655 "rdma_cm_event_timeout_ms": 0, 00:13:09.655 "dhchap_digests": [ 00:13:09.655 "sha256", 00:13:09.655 "sha384", 00:13:09.655 "sha512" 00:13:09.655 ], 00:13:09.655 "dhchap_dhgroups": [ 00:13:09.655 "null", 00:13:09.655 "ffdhe2048", 00:13:09.655 "ffdhe3072", 00:13:09.655 "ffdhe4096", 00:13:09.655 "ffdhe6144", 00:13:09.655 "ffdhe8192" 00:13:09.655 ] 00:13:09.655 } 00:13:09.655 }, 00:13:09.655 { 00:13:09.655 "method": "bdev_nvme_attach_controller", 00:13:09.655 "params": { 00:13:09.656 "name": "TLSTEST", 00:13:09.656 "trtype": "TCP", 00:13:09.656 "adrfam": "IPv4", 00:13:09.656 "traddr": "10.0.0.3", 00:13:09.656 "trsvcid": "4420", 00:13:09.656 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:09.656 "prchk_reftag": false, 00:13:09.656 "prchk_guard": false, 00:13:09.656 "ctrlr_loss_timeout_sec": 0, 00:13:09.656 "reconnect_delay_sec": 0, 00:13:09.656 "fast_io_fail_timeout_sec": 0, 00:13:09.656 "psk": "key0", 00:13:09.656 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:09.656 "hdgst": false, 00:13:09.656 "ddgst": false, 00:13:09.656 "multipath": "multipath" 00:13:09.656 } 00:13:09.656 }, 00:13:09.656 { 00:13:09.656 "method": "bdev_nvme_set_hotplug", 00:13:09.656 "params": { 00:13:09.656 "period_us": 100000, 00:13:09.656 "enable": false 00:13:09.656 } 00:13:09.656 }, 00:13:09.656 { 00:13:09.656 "method": "bdev_wait_for_examine" 00:13:09.656 } 00:13:09.656 ] 00:13:09.656 }, 00:13:09.656 { 00:13:09.656 "subsystem": "nbd", 00:13:09.656 "config": [] 00:13:09.656 } 00:13:09.656 ] 00:13:09.656 }' 00:13:09.656 10:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:09.656 [2024-12-09 10:56:02.666727] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:13:09.656 [2024-12-09 10:56:02.666852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72070 ] 00:13:09.656 [2024-12-09 10:56:02.818449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.914 [2024-12-09 10:56:02.866485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.914 [2024-12-09 10:56:02.987693] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:09.914 [2024-12-09 10:56:03.030160] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:10.480 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.480 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:10.480 10:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:10.480 Running I/O for 10 seconds... 00:13:12.793 6753.00 IOPS, 26.38 MiB/s [2024-12-09T10:56:06.909Z] 6792.50 IOPS, 26.53 MiB/s [2024-12-09T10:56:07.845Z] 6832.00 IOPS, 26.69 MiB/s [2024-12-09T10:56:08.783Z] 6837.25 IOPS, 26.71 MiB/s [2024-12-09T10:56:09.720Z] 6835.80 IOPS, 26.70 MiB/s [2024-12-09T10:56:10.657Z] 6835.00 IOPS, 26.70 MiB/s [2024-12-09T10:56:12.036Z] 6844.71 IOPS, 26.74 MiB/s [2024-12-09T10:56:12.973Z] 6852.50 IOPS, 26.77 MiB/s [2024-12-09T10:56:13.910Z] 6858.00 IOPS, 26.79 MiB/s [2024-12-09T10:56:13.910Z] 6853.50 IOPS, 26.77 MiB/s 00:13:20.731 Latency(us) 00:13:20.731 [2024-12-09T10:56:13.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.731 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:20.731 Verification LBA range: start 0x0 length 0x2000 00:13:20.731 TLSTESTn1 : 10.01 6858.92 26.79 0.00 0.00 18632.62 3806.24 15110.48 00:13:20.731 [2024-12-09T10:56:13.910Z] =================================================================================================================== 00:13:20.731 [2024-12-09T10:56:13.910Z] Total : 6858.92 26.79 0.00 0.00 18632.62 3806.24 15110.48 00:13:20.731 { 00:13:20.731 "results": [ 00:13:20.731 { 00:13:20.731 "job": "TLSTESTn1", 00:13:20.731 "core_mask": "0x4", 00:13:20.731 "workload": "verify", 00:13:20.731 "status": "finished", 00:13:20.731 "verify_range": { 00:13:20.731 "start": 0, 00:13:20.731 "length": 8192 00:13:20.731 }, 00:13:20.731 "queue_depth": 128, 00:13:20.731 "io_size": 4096, 00:13:20.731 "runtime": 10.010321, 00:13:20.731 "iops": 6858.920907731131, 00:13:20.731 "mibps": 26.79265979582473, 00:13:20.731 "io_failed": 0, 00:13:20.731 "io_timeout": 0, 00:13:20.731 "avg_latency_us": 18632.616089050913, 00:13:20.731 "min_latency_us": 3806.2393013100436, 00:13:20.731 "max_latency_us": 15110.48384279476 00:13:20.731 } 00:13:20.731 ], 00:13:20.731 "core_count": 1 00:13:20.731 } 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72070 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72070 ']' 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 72070 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72070 00:13:20.731 killing process with pid 72070 00:13:20.731 Received shutdown signal, test time was about 10.000000 seconds 00:13:20.731 00:13:20.731 Latency(us) 00:13:20.731 [2024-12-09T10:56:13.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.731 [2024-12-09T10:56:13.910Z] =================================================================================================================== 00:13:20.731 [2024-12-09T10:56:13.910Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72070' 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72070 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72070 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72038 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72038 ']' 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72038 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:20.731 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72038 00:13:20.991 killing process with pid 72038 00:13:20.991 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:20.991 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:20.991 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72038' 00:13:20.991 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72038 00:13:20.991 10:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72038 00:13:20.991 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:13:20.991 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:20.991 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:20.991 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:20.991 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72203 00:13:20.991 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:20.991 10:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72203 00:13:20.991 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72203 ']' 00:13:20.991 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.991 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.991 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.991 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.991 10:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:21.253 [2024-12-09 10:56:14.210024] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:13:21.253 [2024-12-09 10:56:14.210133] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.253 [2024-12-09 10:56:14.348400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.253 [2024-12-09 10:56:14.394665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.253 [2024-12-09 10:56:14.394816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.253 [2024-12-09 10:56:14.394853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.253 [2024-12-09 10:56:14.394879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.253 [2024-12-09 10:56:14.394896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:21.253 [2024-12-09 10:56:14.395219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.512 [2024-12-09 10:56:14.436048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:22.080 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.080 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:22.080 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:22.080 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:22.080 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.080 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.080 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ltsKTZPN6i 00:13:22.080 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ltsKTZPN6i 00:13:22.080 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:22.339 [2024-12-09 10:56:15.280825] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.339 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:22.598 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:22.598 [2024-12-09 10:56:15.692154] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:22.598 [2024-12-09 10:56:15.692406] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:22.598 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:22.858 malloc0 00:13:22.858 10:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:23.117 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ltsKTZPN6i 00:13:23.117 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:23.377 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72259 00:13:23.377 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:23.377 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:23.377 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72259 /var/tmp/bdevperf.sock 00:13:23.377 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72259 ']' 00:13:23.377 
10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:23.377 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:23.377 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:23.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:23.377 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:23.377 10:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:23.377 [2024-12-09 10:56:16.542184] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:13:23.377 [2024-12-09 10:56:16.542318] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72259 ] 00:13:23.636 [2024-12-09 10:56:16.693164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.636 [2024-12-09 10:56:16.738269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.637 [2024-12-09 10:56:16.779035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:24.205 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.205 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:24.205 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ltsKTZPN6i 00:13:24.464 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:24.723 [2024-12-09 10:56:17.735754] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:24.723 nvme0n1 00:13:24.723 10:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:24.983 Running I/O for 1 seconds... 
00:13:26.013 6668.00 IOPS, 26.05 MiB/s 00:13:26.013 Latency(us) 00:13:26.013 [2024-12-09T10:56:19.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.013 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:26.013 Verification LBA range: start 0x0 length 0x2000 00:13:26.013 nvme0n1 : 1.01 6728.51 26.28 0.00 0.00 18904.52 3720.38 14309.17 00:13:26.013 [2024-12-09T10:56:19.192Z] =================================================================================================================== 00:13:26.013 [2024-12-09T10:56:19.192Z] Total : 6728.51 26.28 0.00 0.00 18904.52 3720.38 14309.17 00:13:26.013 { 00:13:26.013 "results": [ 00:13:26.013 { 00:13:26.013 "job": "nvme0n1", 00:13:26.013 "core_mask": "0x2", 00:13:26.013 "workload": "verify", 00:13:26.013 "status": "finished", 00:13:26.013 "verify_range": { 00:13:26.013 "start": 0, 00:13:26.013 "length": 8192 00:13:26.013 }, 00:13:26.013 "queue_depth": 128, 00:13:26.013 "io_size": 4096, 00:13:26.013 "runtime": 1.010031, 00:13:26.013 "iops": 6728.506352775311, 00:13:26.013 "mibps": 26.28322794052856, 00:13:26.013 "io_failed": 0, 00:13:26.013 "io_timeout": 0, 00:13:26.013 "avg_latency_us": 18904.517396567724, 00:13:26.013 "min_latency_us": 3720.3842794759826, 00:13:26.013 "max_latency_us": 14309.170305676857 00:13:26.013 } 00:13:26.013 ], 00:13:26.013 "core_count": 1 00:13:26.013 } 00:13:26.013 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72259 00:13:26.014 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72259 ']' 00:13:26.014 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72259 00:13:26.014 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:26.014 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.014 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72259 00:13:26.014 killing process with pid 72259 00:13:26.014 Received shutdown signal, test time was about 1.000000 seconds 00:13:26.014 00:13:26.014 Latency(us) 00:13:26.014 [2024-12-09T10:56:19.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.014 [2024-12-09T10:56:19.193Z] =================================================================================================================== 00:13:26.014 [2024-12-09T10:56:19.193Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:26.014 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:26.014 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:26.014 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72259' 00:13:26.014 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72259 00:13:26.014 10:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72259 00:13:26.273 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72203 00:13:26.273 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72203 ']' 00:13:26.273 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72203 00:13:26.273 10:56:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:26.273 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:26.273 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72203 00:13:26.273 killing process with pid 72203 00:13:26.273 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:26.273 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:26.273 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72203' 00:13:26.273 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72203 00:13:26.273 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72203 00:13:26.274 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:13:26.274 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:26.274 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:26.274 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:26.533 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:26.533 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72304 00:13:26.533 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72304 00:13:26.533 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72304 ']' 00:13:26.533 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.533 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.533 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.533 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.533 10:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:26.533 [2024-12-09 10:56:19.507043] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:13:26.533 [2024-12-09 10:56:19.507184] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.533 [2024-12-09 10:56:19.656490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.533 [2024-12-09 10:56:19.699405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.533 [2024-12-09 10:56:19.699552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:26.533 [2024-12-09 10:56:19.699586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.533 [2024-12-09 10:56:19.699612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.533 [2024-12-09 10:56:19.699627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:26.533 [2024-12-09 10:56:19.699918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.792 [2024-12-09 10:56:19.739287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:27.361 [2024-12-09 10:56:20.407602] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:27.361 malloc0 00:13:27.361 [2024-12-09 10:56:20.435601] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:27.361 [2024-12-09 10:56:20.435799] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72336 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72336 /var/tmp/bdevperf.sock 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72336 ']' 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:27.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
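Note: the calls that follow register the pre-shared key with this bdevperf instance and attach a TLS-secured controller over its RPC socket before running I/O; condensed, the initiator-side sequence is (names and addresses taken from this run):

    # Initiator side, as driven over the bdevperf RPC socket below.
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ltsKTZPN6i
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests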
00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.361 10:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:27.361 [2024-12-09 10:56:20.517160] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:13:27.361 [2024-12-09 10:56:20.517275] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72336 ] 00:13:27.620 [2024-12-09 10:56:20.667505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.620 [2024-12-09 10:56:20.712148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.620 [2024-12-09 10:56:20.752719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:28.189 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.189 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:28.189 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ltsKTZPN6i 00:13:28.448 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:28.706 [2024-12-09 10:56:21.748786] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:28.706 nvme0n1 00:13:28.706 10:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:28.965 Running I/O for 1 seconds... 
00:13:29.900 6670.00 IOPS, 26.05 MiB/s 00:13:29.900 Latency(us) 00:13:29.900 [2024-12-09T10:56:23.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.900 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:29.900 Verification LBA range: start 0x0 length 0x2000 00:13:29.900 nvme0n1 : 1.01 6732.98 26.30 0.00 0.00 18894.09 3548.67 14538.12 00:13:29.900 [2024-12-09T10:56:23.079Z] =================================================================================================================== 00:13:29.900 [2024-12-09T10:56:23.079Z] Total : 6732.98 26.30 0.00 0.00 18894.09 3548.67 14538.12 00:13:29.900 { 00:13:29.900 "results": [ 00:13:29.900 { 00:13:29.900 "job": "nvme0n1", 00:13:29.900 "core_mask": "0x2", 00:13:29.900 "workload": "verify", 00:13:29.900 "status": "finished", 00:13:29.900 "verify_range": { 00:13:29.900 "start": 0, 00:13:29.900 "length": 8192 00:13:29.900 }, 00:13:29.900 "queue_depth": 128, 00:13:29.900 "io_size": 4096, 00:13:29.900 "runtime": 1.009805, 00:13:29.900 "iops": 6732.983100697659, 00:13:29.900 "mibps": 26.30071523710023, 00:13:29.900 "io_failed": 0, 00:13:29.900 "io_timeout": 0, 00:13:29.900 "avg_latency_us": 18894.09028376251, 00:13:29.900 "min_latency_us": 3548.6742358078604, 00:13:29.900 "max_latency_us": 14538.117030567686 00:13:29.900 } 00:13:29.900 ], 00:13:29.900 "core_count": 1 00:13:29.900 } 00:13:29.900 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:13:29.900 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.901 10:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.160 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.160 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:13:30.160 "subsystems": [ 00:13:30.160 { 00:13:30.160 "subsystem": "keyring", 00:13:30.160 "config": [ 00:13:30.160 { 00:13:30.160 "method": "keyring_file_add_key", 00:13:30.160 "params": { 00:13:30.160 "name": "key0", 00:13:30.160 "path": "/tmp/tmp.ltsKTZPN6i" 00:13:30.160 } 00:13:30.160 } 00:13:30.160 ] 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "subsystem": "iobuf", 00:13:30.160 "config": [ 00:13:30.160 { 00:13:30.160 "method": "iobuf_set_options", 00:13:30.160 "params": { 00:13:30.160 "small_pool_count": 8192, 00:13:30.160 "large_pool_count": 1024, 00:13:30.160 "small_bufsize": 8192, 00:13:30.160 "large_bufsize": 135168, 00:13:30.160 "enable_numa": false 00:13:30.160 } 00:13:30.160 } 00:13:30.160 ] 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "subsystem": "sock", 00:13:30.160 "config": [ 00:13:30.160 { 00:13:30.160 "method": "sock_set_default_impl", 00:13:30.160 "params": { 00:13:30.160 "impl_name": "uring" 00:13:30.160 } 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "method": "sock_impl_set_options", 00:13:30.160 "params": { 00:13:30.160 "impl_name": "ssl", 00:13:30.160 "recv_buf_size": 4096, 00:13:30.160 "send_buf_size": 4096, 00:13:30.160 "enable_recv_pipe": true, 00:13:30.160 "enable_quickack": false, 00:13:30.160 "enable_placement_id": 0, 00:13:30.160 "enable_zerocopy_send_server": true, 00:13:30.160 "enable_zerocopy_send_client": false, 00:13:30.160 "zerocopy_threshold": 0, 00:13:30.160 "tls_version": 0, 00:13:30.160 "enable_ktls": false 00:13:30.160 } 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "method": "sock_impl_set_options", 00:13:30.160 "params": { 00:13:30.160 "impl_name": "posix", 
00:13:30.160 "recv_buf_size": 2097152, 00:13:30.160 "send_buf_size": 2097152, 00:13:30.160 "enable_recv_pipe": true, 00:13:30.160 "enable_quickack": false, 00:13:30.160 "enable_placement_id": 0, 00:13:30.160 "enable_zerocopy_send_server": true, 00:13:30.160 "enable_zerocopy_send_client": false, 00:13:30.160 "zerocopy_threshold": 0, 00:13:30.160 "tls_version": 0, 00:13:30.160 "enable_ktls": false 00:13:30.160 } 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "method": "sock_impl_set_options", 00:13:30.160 "params": { 00:13:30.160 "impl_name": "uring", 00:13:30.160 "recv_buf_size": 2097152, 00:13:30.160 "send_buf_size": 2097152, 00:13:30.160 "enable_recv_pipe": true, 00:13:30.160 "enable_quickack": false, 00:13:30.160 "enable_placement_id": 0, 00:13:30.160 "enable_zerocopy_send_server": false, 00:13:30.160 "enable_zerocopy_send_client": false, 00:13:30.160 "zerocopy_threshold": 0, 00:13:30.160 "tls_version": 0, 00:13:30.160 "enable_ktls": false 00:13:30.160 } 00:13:30.160 } 00:13:30.160 ] 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "subsystem": "vmd", 00:13:30.160 "config": [] 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "subsystem": "accel", 00:13:30.160 "config": [ 00:13:30.160 { 00:13:30.160 "method": "accel_set_options", 00:13:30.160 "params": { 00:13:30.160 "small_cache_size": 128, 00:13:30.160 "large_cache_size": 16, 00:13:30.160 "task_count": 2048, 00:13:30.160 "sequence_count": 2048, 00:13:30.160 "buf_count": 2048 00:13:30.160 } 00:13:30.160 } 00:13:30.160 ] 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "subsystem": "bdev", 00:13:30.160 "config": [ 00:13:30.160 { 00:13:30.160 "method": "bdev_set_options", 00:13:30.160 "params": { 00:13:30.160 "bdev_io_pool_size": 65535, 00:13:30.160 "bdev_io_cache_size": 256, 00:13:30.160 "bdev_auto_examine": true, 00:13:30.160 "iobuf_small_cache_size": 128, 00:13:30.160 "iobuf_large_cache_size": 16 00:13:30.160 } 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "method": "bdev_raid_set_options", 00:13:30.160 "params": { 00:13:30.160 "process_window_size_kb": 1024, 00:13:30.160 "process_max_bandwidth_mb_sec": 0 00:13:30.160 } 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "method": "bdev_iscsi_set_options", 00:13:30.160 "params": { 00:13:30.160 "timeout_sec": 30 00:13:30.160 } 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "method": "bdev_nvme_set_options", 00:13:30.160 "params": { 00:13:30.160 "action_on_timeout": "none", 00:13:30.160 "timeout_us": 0, 00:13:30.160 "timeout_admin_us": 0, 00:13:30.160 "keep_alive_timeout_ms": 10000, 00:13:30.160 "arbitration_burst": 0, 00:13:30.160 "low_priority_weight": 0, 00:13:30.160 "medium_priority_weight": 0, 00:13:30.160 "high_priority_weight": 0, 00:13:30.160 "nvme_adminq_poll_period_us": 10000, 00:13:30.160 "nvme_ioq_poll_period_us": 0, 00:13:30.160 "io_queue_requests": 0, 00:13:30.160 "delay_cmd_submit": true, 00:13:30.160 "transport_retry_count": 4, 00:13:30.160 "bdev_retry_count": 3, 00:13:30.160 "transport_ack_timeout": 0, 00:13:30.160 "ctrlr_loss_timeout_sec": 0, 00:13:30.160 "reconnect_delay_sec": 0, 00:13:30.160 "fast_io_fail_timeout_sec": 0, 00:13:30.160 "disable_auto_failback": false, 00:13:30.160 "generate_uuids": false, 00:13:30.160 "transport_tos": 0, 00:13:30.160 "nvme_error_stat": false, 00:13:30.160 "rdma_srq_size": 0, 00:13:30.160 "io_path_stat": false, 00:13:30.160 "allow_accel_sequence": false, 00:13:30.160 "rdma_max_cq_size": 0, 00:13:30.160 "rdma_cm_event_timeout_ms": 0, 00:13:30.160 "dhchap_digests": [ 00:13:30.160 "sha256", 00:13:30.160 "sha384", 00:13:30.160 "sha512" 00:13:30.160 ], 00:13:30.160 
"dhchap_dhgroups": [ 00:13:30.160 "null", 00:13:30.160 "ffdhe2048", 00:13:30.160 "ffdhe3072", 00:13:30.160 "ffdhe4096", 00:13:30.160 "ffdhe6144", 00:13:30.160 "ffdhe8192" 00:13:30.160 ] 00:13:30.160 } 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "method": "bdev_nvme_set_hotplug", 00:13:30.160 "params": { 00:13:30.160 "period_us": 100000, 00:13:30.160 "enable": false 00:13:30.160 } 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "method": "bdev_malloc_create", 00:13:30.160 "params": { 00:13:30.160 "name": "malloc0", 00:13:30.160 "num_blocks": 8192, 00:13:30.160 "block_size": 4096, 00:13:30.160 "physical_block_size": 4096, 00:13:30.160 "uuid": "5e04f93d-6f6d-4db2-8618-c33f3c2fb871", 00:13:30.160 "optimal_io_boundary": 0, 00:13:30.160 "md_size": 0, 00:13:30.160 "dif_type": 0, 00:13:30.160 "dif_is_head_of_md": false, 00:13:30.160 "dif_pi_format": 0 00:13:30.160 } 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "method": "bdev_wait_for_examine" 00:13:30.160 } 00:13:30.160 ] 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "subsystem": "nbd", 00:13:30.160 "config": [] 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "subsystem": "scheduler", 00:13:30.160 "config": [ 00:13:30.160 { 00:13:30.160 "method": "framework_set_scheduler", 00:13:30.160 "params": { 00:13:30.160 "name": "static" 00:13:30.160 } 00:13:30.160 } 00:13:30.160 ] 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "subsystem": "nvmf", 00:13:30.160 "config": [ 00:13:30.160 { 00:13:30.160 "method": "nvmf_set_config", 00:13:30.160 "params": { 00:13:30.160 "discovery_filter": "match_any", 00:13:30.160 "admin_cmd_passthru": { 00:13:30.160 "identify_ctrlr": false 00:13:30.160 }, 00:13:30.160 "dhchap_digests": [ 00:13:30.160 "sha256", 00:13:30.160 "sha384", 00:13:30.160 "sha512" 00:13:30.160 ], 00:13:30.160 "dhchap_dhgroups": [ 00:13:30.160 "null", 00:13:30.160 "ffdhe2048", 00:13:30.160 "ffdhe3072", 00:13:30.160 "ffdhe4096", 00:13:30.160 "ffdhe6144", 00:13:30.160 "ffdhe8192" 00:13:30.160 ] 00:13:30.160 } 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "method": "nvmf_set_max_subsystems", 00:13:30.160 "params": { 00:13:30.160 "max_subsystems": 1024 00:13:30.160 } 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "method": "nvmf_set_crdt", 00:13:30.160 "params": { 00:13:30.160 "crdt1": 0, 00:13:30.160 "crdt2": 0, 00:13:30.160 "crdt3": 0 00:13:30.160 } 00:13:30.160 }, 00:13:30.160 { 00:13:30.160 "method": "nvmf_create_transport", 00:13:30.160 "params": { 00:13:30.160 "trtype": "TCP", 00:13:30.160 "max_queue_depth": 128, 00:13:30.161 "max_io_qpairs_per_ctrlr": 127, 00:13:30.161 "in_capsule_data_size": 4096, 00:13:30.161 "max_io_size": 131072, 00:13:30.161 "io_unit_size": 131072, 00:13:30.161 "max_aq_depth": 128, 00:13:30.161 "num_shared_buffers": 511, 00:13:30.161 "buf_cache_size": 4294967295, 00:13:30.161 "dif_insert_or_strip": false, 00:13:30.161 "zcopy": false, 00:13:30.161 "c2h_success": false, 00:13:30.161 "sock_priority": 0, 00:13:30.161 "abort_timeout_sec": 1, 00:13:30.161 "ack_timeout": 0, 00:13:30.161 "data_wr_pool_size": 0 00:13:30.161 } 00:13:30.161 }, 00:13:30.161 { 00:13:30.161 "method": "nvmf_create_subsystem", 00:13:30.161 "params": { 00:13:30.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.161 "allow_any_host": false, 00:13:30.161 "serial_number": "00000000000000000000", 00:13:30.161 "model_number": "SPDK bdev Controller", 00:13:30.161 "max_namespaces": 32, 00:13:30.161 "min_cntlid": 1, 00:13:30.161 "max_cntlid": 65519, 00:13:30.161 "ana_reporting": false 00:13:30.161 } 00:13:30.161 }, 00:13:30.161 { 00:13:30.161 "method": "nvmf_subsystem_add_host", 
00:13:30.161 "params": { 00:13:30.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.161 "host": "nqn.2016-06.io.spdk:host1", 00:13:30.161 "psk": "key0" 00:13:30.161 } 00:13:30.161 }, 00:13:30.161 { 00:13:30.161 "method": "nvmf_subsystem_add_ns", 00:13:30.161 "params": { 00:13:30.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.161 "namespace": { 00:13:30.161 "nsid": 1, 00:13:30.161 "bdev_name": "malloc0", 00:13:30.161 "nguid": "5E04F93D6F6D4DB28618C33F3C2FB871", 00:13:30.161 "uuid": "5e04f93d-6f6d-4db2-8618-c33f3c2fb871", 00:13:30.161 "no_auto_visible": false 00:13:30.161 } 00:13:30.161 } 00:13:30.161 }, 00:13:30.161 { 00:13:30.161 "method": "nvmf_subsystem_add_listener", 00:13:30.161 "params": { 00:13:30.161 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.161 "listen_address": { 00:13:30.161 "trtype": "TCP", 00:13:30.161 "adrfam": "IPv4", 00:13:30.161 "traddr": "10.0.0.3", 00:13:30.161 "trsvcid": "4420" 00:13:30.161 }, 00:13:30.161 "secure_channel": false, 00:13:30.161 "sock_impl": "ssl" 00:13:30.161 } 00:13:30.161 } 00:13:30.161 ] 00:13:30.161 } 00:13:30.161 ] 00:13:30.161 }' 00:13:30.161 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:30.420 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:13:30.420 "subsystems": [ 00:13:30.420 { 00:13:30.420 "subsystem": "keyring", 00:13:30.420 "config": [ 00:13:30.420 { 00:13:30.420 "method": "keyring_file_add_key", 00:13:30.420 "params": { 00:13:30.420 "name": "key0", 00:13:30.421 "path": "/tmp/tmp.ltsKTZPN6i" 00:13:30.421 } 00:13:30.421 } 00:13:30.421 ] 00:13:30.421 }, 00:13:30.421 { 00:13:30.421 "subsystem": "iobuf", 00:13:30.421 "config": [ 00:13:30.421 { 00:13:30.421 "method": "iobuf_set_options", 00:13:30.421 "params": { 00:13:30.421 "small_pool_count": 8192, 00:13:30.421 "large_pool_count": 1024, 00:13:30.421 "small_bufsize": 8192, 00:13:30.421 "large_bufsize": 135168, 00:13:30.421 "enable_numa": false 00:13:30.421 } 00:13:30.421 } 00:13:30.421 ] 00:13:30.421 }, 00:13:30.421 { 00:13:30.421 "subsystem": "sock", 00:13:30.421 "config": [ 00:13:30.421 { 00:13:30.421 "method": "sock_set_default_impl", 00:13:30.421 "params": { 00:13:30.421 "impl_name": "uring" 00:13:30.421 } 00:13:30.421 }, 00:13:30.421 { 00:13:30.421 "method": "sock_impl_set_options", 00:13:30.421 "params": { 00:13:30.421 "impl_name": "ssl", 00:13:30.421 "recv_buf_size": 4096, 00:13:30.421 "send_buf_size": 4096, 00:13:30.421 "enable_recv_pipe": true, 00:13:30.421 "enable_quickack": false, 00:13:30.421 "enable_placement_id": 0, 00:13:30.421 "enable_zerocopy_send_server": true, 00:13:30.421 "enable_zerocopy_send_client": false, 00:13:30.421 "zerocopy_threshold": 0, 00:13:30.421 "tls_version": 0, 00:13:30.421 "enable_ktls": false 00:13:30.421 } 00:13:30.421 }, 00:13:30.421 { 00:13:30.421 "method": "sock_impl_set_options", 00:13:30.421 "params": { 00:13:30.421 "impl_name": "posix", 00:13:30.421 "recv_buf_size": 2097152, 00:13:30.421 "send_buf_size": 2097152, 00:13:30.421 "enable_recv_pipe": true, 00:13:30.421 "enable_quickack": false, 00:13:30.421 "enable_placement_id": 0, 00:13:30.421 "enable_zerocopy_send_server": true, 00:13:30.421 "enable_zerocopy_send_client": false, 00:13:30.421 "zerocopy_threshold": 0, 00:13:30.421 "tls_version": 0, 00:13:30.421 "enable_ktls": false 00:13:30.421 } 00:13:30.421 }, 00:13:30.421 { 00:13:30.421 "method": "sock_impl_set_options", 00:13:30.421 "params": { 00:13:30.421 "impl_name": "uring", 00:13:30.421 
"recv_buf_size": 2097152, 00:13:30.421 "send_buf_size": 2097152, 00:13:30.421 "enable_recv_pipe": true, 00:13:30.421 "enable_quickack": false, 00:13:30.421 "enable_placement_id": 0, 00:13:30.421 "enable_zerocopy_send_server": false, 00:13:30.421 "enable_zerocopy_send_client": false, 00:13:30.421 "zerocopy_threshold": 0, 00:13:30.421 "tls_version": 0, 00:13:30.421 "enable_ktls": false 00:13:30.421 } 00:13:30.421 } 00:13:30.421 ] 00:13:30.421 }, 00:13:30.421 { 00:13:30.421 "subsystem": "vmd", 00:13:30.421 "config": [] 00:13:30.421 }, 00:13:30.421 { 00:13:30.421 "subsystem": "accel", 00:13:30.421 "config": [ 00:13:30.421 { 00:13:30.421 "method": "accel_set_options", 00:13:30.421 "params": { 00:13:30.421 "small_cache_size": 128, 00:13:30.421 "large_cache_size": 16, 00:13:30.421 "task_count": 2048, 00:13:30.421 "sequence_count": 2048, 00:13:30.421 "buf_count": 2048 00:13:30.421 } 00:13:30.421 } 00:13:30.421 ] 00:13:30.421 }, 00:13:30.421 { 00:13:30.421 "subsystem": "bdev", 00:13:30.421 "config": [ 00:13:30.421 { 00:13:30.421 "method": "bdev_set_options", 00:13:30.421 "params": { 00:13:30.421 "bdev_io_pool_size": 65535, 00:13:30.421 "bdev_io_cache_size": 256, 00:13:30.421 "bdev_auto_examine": true, 00:13:30.421 "iobuf_small_cache_size": 128, 00:13:30.421 "iobuf_large_cache_size": 16 00:13:30.421 } 00:13:30.421 }, 00:13:30.421 { 00:13:30.421 "method": "bdev_raid_set_options", 00:13:30.421 "params": { 00:13:30.421 "process_window_size_kb": 1024, 00:13:30.421 "process_max_bandwidth_mb_sec": 0 00:13:30.421 } 00:13:30.421 }, 00:13:30.421 { 00:13:30.421 "method": "bdev_iscsi_set_options", 00:13:30.421 "params": { 00:13:30.421 "timeout_sec": 30 00:13:30.421 } 00:13:30.421 }, 00:13:30.421 { 00:13:30.421 "method": "bdev_nvme_set_options", 00:13:30.421 "params": { 00:13:30.421 "action_on_timeout": "none", 00:13:30.421 "timeout_us": 0, 00:13:30.421 "timeout_admin_us": 0, 00:13:30.421 "keep_alive_timeout_ms": 10000, 00:13:30.421 "arbitration_burst": 0, 00:13:30.421 "low_priority_weight": 0, 00:13:30.421 "medium_priority_weight": 0, 00:13:30.421 "high_priority_weight": 0, 00:13:30.421 "nvme_adminq_poll_period_us": 10000, 00:13:30.421 "nvme_ioq_poll_period_us": 0, 00:13:30.421 "io_queue_requests": 512, 00:13:30.421 "delay_cmd_submit": true, 00:13:30.421 "transport_retry_count": 4, 00:13:30.421 "bdev_retry_count": 3, 00:13:30.421 "transport_ack_timeout": 0, 00:13:30.421 "ctrlr_loss_timeout_sec": 0, 00:13:30.421 "reconnect_delay_sec": 0, 00:13:30.421 "fast_io_fail_timeout_sec": 0, 00:13:30.421 "disable_auto_failback": false, 00:13:30.421 "generate_uuids": false, 00:13:30.421 "transport_tos": 0, 00:13:30.421 "nvme_error_stat": false, 00:13:30.421 "rdma_srq_size": 0, 00:13:30.421 "io_path_stat": false, 00:13:30.421 "allow_accel_sequence": false, 00:13:30.421 "rdma_max_cq_size": 0, 00:13:30.421 "rdma_cm_event_timeout_ms": 0, 00:13:30.421 "dhchap_digests": [ 00:13:30.421 "sha256", 00:13:30.421 "sha384", 00:13:30.421 "sha512" 00:13:30.421 ], 00:13:30.421 "dhchap_dhgroups": [ 00:13:30.421 "null", 00:13:30.421 "ffdhe2048", 00:13:30.421 "ffdhe3072", 00:13:30.421 "ffdhe4096", 00:13:30.421 "ffdhe6144", 00:13:30.421 "ffdhe8192" 00:13:30.421 ] 00:13:30.421 } 00:13:30.421 }, 00:13:30.421 { 00:13:30.421 "method": "bdev_nvme_attach_controller", 00:13:30.421 "params": { 00:13:30.421 "name": "nvme0", 00:13:30.421 "trtype": "TCP", 00:13:30.421 "adrfam": "IPv4", 00:13:30.421 "traddr": "10.0.0.3", 00:13:30.421 "trsvcid": "4420", 00:13:30.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.421 "prchk_reftag": false, 00:13:30.421 
"prchk_guard": false, 00:13:30.421 "ctrlr_loss_timeout_sec": 0, 00:13:30.421 "reconnect_delay_sec": 0, 00:13:30.421 "fast_io_fail_timeout_sec": 0, 00:13:30.421 "psk": "key0", 00:13:30.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:30.421 "hdgst": false, 00:13:30.421 "ddgst": false, 00:13:30.421 "multipath": "multipath" 00:13:30.421 } 00:13:30.421 }, 00:13:30.421 { 00:13:30.421 "method": "bdev_nvme_set_hotplug", 00:13:30.421 "params": { 00:13:30.421 "period_us": 100000, 00:13:30.421 "enable": false 00:13:30.421 } 00:13:30.421 }, 00:13:30.421 { 00:13:30.421 "method": "bdev_enable_histogram", 00:13:30.421 "params": { 00:13:30.421 "name": "nvme0n1", 00:13:30.421 "enable": true 00:13:30.421 } 00:13:30.421 }, 00:13:30.421 { 00:13:30.421 "method": "bdev_wait_for_examine" 00:13:30.421 } 00:13:30.421 ] 00:13:30.421 }, 00:13:30.421 { 00:13:30.421 "subsystem": "nbd", 00:13:30.421 "config": [] 00:13:30.421 } 00:13:30.421 ] 00:13:30.421 }' 00:13:30.421 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72336 00:13:30.421 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72336 ']' 00:13:30.421 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72336 00:13:30.421 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:30.421 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.421 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72336 00:13:30.421 killing process with pid 72336 00:13:30.421 Received shutdown signal, test time was about 1.000000 seconds 00:13:30.421 00:13:30.421 Latency(us) 00:13:30.421 [2024-12-09T10:56:23.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.421 [2024-12-09T10:56:23.600Z] =================================================================================================================== 00:13:30.421 [2024-12-09T10:56:23.600Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:30.421 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:30.421 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:30.422 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72336' 00:13:30.422 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72336 00:13:30.422 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72336 00:13:30.681 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72304 00:13:30.681 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72304 ']' 00:13:30.681 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72304 00:13:30.681 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:30.681 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.681 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72304 00:13:30.681 killing process with pid 72304 00:13:30.681 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:13:30.681 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.681 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72304' 00:13:30.681 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72304 00:13:30.681 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72304 00:13:30.940 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:13:30.940 "subsystems": [ 00:13:30.940 { 00:13:30.940 "subsystem": "keyring", 00:13:30.940 "config": [ 00:13:30.940 { 00:13:30.940 "method": "keyring_file_add_key", 00:13:30.940 "params": { 00:13:30.940 "name": "key0", 00:13:30.940 "path": "/tmp/tmp.ltsKTZPN6i" 00:13:30.940 } 00:13:30.940 } 00:13:30.940 ] 00:13:30.940 }, 00:13:30.940 { 00:13:30.940 "subsystem": "iobuf", 00:13:30.940 "config": [ 00:13:30.940 { 00:13:30.940 "method": "iobuf_set_options", 00:13:30.940 "params": { 00:13:30.940 "small_pool_count": 8192, 00:13:30.940 "large_pool_count": 1024, 00:13:30.941 "small_bufsize": 8192, 00:13:30.941 "large_bufsize": 135168, 00:13:30.941 "enable_numa": false 00:13:30.941 } 00:13:30.941 } 00:13:30.941 ] 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "subsystem": "sock", 00:13:30.941 "config": [ 00:13:30.941 { 00:13:30.941 "method": "sock_set_default_impl", 00:13:30.941 "params": { 00:13:30.941 "impl_name": "uring" 00:13:30.941 } 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "method": "sock_impl_set_options", 00:13:30.941 "params": { 00:13:30.941 "impl_name": "ssl", 00:13:30.941 "recv_buf_size": 4096, 00:13:30.941 "send_buf_size": 4096, 00:13:30.941 "enable_recv_pipe": true, 00:13:30.941 "enable_quickack": false, 00:13:30.941 "enable_placement_id": 0, 00:13:30.941 "enable_zerocopy_send_server": true, 00:13:30.941 "enable_zerocopy_send_client": false, 00:13:30.941 "zerocopy_threshold": 0, 00:13:30.941 "tls_version": 0, 00:13:30.941 "enable_ktls": false 00:13:30.941 } 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "method": "sock_impl_set_options", 00:13:30.941 "params": { 00:13:30.941 "impl_name": "posix", 00:13:30.941 "recv_buf_size": 2097152, 00:13:30.941 "send_buf_size": 2097152, 00:13:30.941 "enable_recv_pipe": true, 00:13:30.941 "enable_quickack": false, 00:13:30.941 "enable_placement_id": 0, 00:13:30.941 "enable_zerocopy_send_server": true, 00:13:30.941 "enable_zerocopy_send_client": false, 00:13:30.941 "zerocopy_threshold": 0, 00:13:30.941 "tls_version": 0, 00:13:30.941 "enable_ktls": false 00:13:30.941 } 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "method": "sock_impl_set_options", 00:13:30.941 "params": { 00:13:30.941 "impl_name": "uring", 00:13:30.941 "recv_buf_size": 2097152, 00:13:30.941 "send_buf_size": 2097152, 00:13:30.941 "enable_recv_pipe": true, 00:13:30.941 "enable_quickack": false, 00:13:30.941 "enable_placement_id": 0, 00:13:30.941 "enable_zerocopy_send_server": false, 00:13:30.941 "enable_zerocopy_send_client": false, 00:13:30.941 "zerocopy_threshold": 0, 00:13:30.941 "tls_version": 0, 00:13:30.941 "enable_ktls": false 00:13:30.941 } 00:13:30.941 } 00:13:30.941 ] 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "subsystem": "vmd", 00:13:30.941 "config": [] 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "subsystem": "accel", 00:13:30.941 "config": [ 00:13:30.941 { 00:13:30.941 "method": "accel_set_options", 00:13:30.941 "params": { 00:13:30.941 "small_cache_size": 128, 00:13:30.941 "large_cache_size": 16, 00:13:30.941 "task_count": 
2048, 00:13:30.941 "sequence_count": 2048, 00:13:30.941 "buf_count": 2048 00:13:30.941 } 00:13:30.941 } 00:13:30.941 ] 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "subsystem": "bdev", 00:13:30.941 "config": [ 00:13:30.941 { 00:13:30.941 "method": "bdev_set_options", 00:13:30.941 "params": { 00:13:30.941 "bdev_io_pool_size": 65535, 00:13:30.941 "bdev_io_cache_size": 256, 00:13:30.941 "bdev_auto_examine": true, 00:13:30.941 "iobuf_small_cache_size": 128, 00:13:30.941 "iobuf_large_cache_size": 16 00:13:30.941 } 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "method": "bdev_raid_set_options", 00:13:30.941 "params": { 00:13:30.941 "process_window_size_kb": 1024, 00:13:30.941 "process_max_bandwidth_mb_sec": 0 00:13:30.941 } 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "method": "bdev_iscsi_set_options", 00:13:30.941 "params": { 00:13:30.941 "timeout_sec": 30 00:13:30.941 } 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "method": "bdev_nvme_set_options", 00:13:30.941 "params": { 00:13:30.941 "action_on_timeout": "none", 00:13:30.941 "timeout_us": 0, 00:13:30.941 "timeout_admin_us": 0, 00:13:30.941 "keep_alive_timeout_ms": 10000, 00:13:30.941 "arbitration_burst": 0, 00:13:30.941 "low_priority_weight": 0, 00:13:30.941 "medium_priority_weight": 0, 00:13:30.941 "high_priority_weight": 0, 00:13:30.941 "nvme_adminq_poll_period_us": 10000, 00:13:30.941 "nvme_ioq_poll_period_us": 0, 00:13:30.941 "io_queue_requests": 0, 00:13:30.941 "delay_cmd_submit": true, 00:13:30.941 "transport_retry_count": 4, 00:13:30.941 "bdev_retry_count": 3, 00:13:30.941 "transport_ack_timeout": 0, 00:13:30.941 "ctrlr_loss_timeout_sec": 0, 00:13:30.941 "reconnect_delay_sec": 0, 00:13:30.941 "fast_io_fail_timeout_sec": 0, 00:13:30.941 "disable_auto_failback": false, 00:13:30.941 "generate_uuids": false, 00:13:30.941 "transport_tos": 0, 00:13:30.941 "nvme_error_stat": false, 00:13:30.941 "rdma_srq_size": 0, 00:13:30.941 "io_path_stat": false, 00:13:30.941 "allow_accel_sequence": false, 00:13:30.941 "rdma_max_cq_size": 0, 00:13:30.941 "rdma_cm_event_timeout_ms": 0, 00:13:30.941 "dhchap_digests": [ 00:13:30.941 "sha256", 00:13:30.941 "sha384", 00:13:30.941 "sha512" 00:13:30.941 ], 00:13:30.941 "dhchap_dhgroups": [ 00:13:30.941 "null", 00:13:30.941 "ffdhe2048", 00:13:30.941 "ffdhe3072", 00:13:30.941 "ffdhe4096", 00:13:30.941 "ffdhe6144", 00:13:30.941 "ffdhe8192" 00:13:30.941 ] 00:13:30.941 } 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "method": "bdev_nvme_set_hotplug", 00:13:30.941 "params": { 00:13:30.941 "period_us": 100000, 00:13:30.941 "enable": false 00:13:30.941 } 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "method": "bdev_malloc_create", 00:13:30.941 "params": { 00:13:30.941 "name": "malloc0", 00:13:30.941 "num_blocks": 8192, 00:13:30.941 "block_size": 4096, 00:13:30.941 "physical_block_size": 4096, 00:13:30.941 "uuid": "5e04f93d-6f6d-4db2-8618-c33f3c2fb871", 00:13:30.941 "optimal_io_boundary": 0, 00:13:30.941 "md_size": 0, 00:13:30.941 "dif_type": 0, 00:13:30.941 "dif_is_head_of_md": false, 00:13:30.941 "dif_pi_format": 0 00:13:30.941 } 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "method": "bdev_wait_for_examine" 00:13:30.941 } 00:13:30.941 ] 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "subsystem": "nbd", 00:13:30.941 "config": [] 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "subsystem": "scheduler", 00:13:30.941 "config": [ 00:13:30.941 { 00:13:30.941 "method": "framework_set_scheduler", 00:13:30.941 "params": { 00:13:30.941 "name": "static" 00:13:30.941 } 00:13:30.941 } 00:13:30.941 ] 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 
"subsystem": "nvmf", 00:13:30.941 "config": [ 00:13:30.941 { 00:13:30.941 "method": "nvmf_set_config", 00:13:30.941 "params": { 00:13:30.941 "discovery_filter": "match_any", 00:13:30.941 "admin_cmd_passthru": { 00:13:30.941 "identify_ctrlr": false 00:13:30.941 }, 00:13:30.941 "dhchap_digests": [ 00:13:30.941 "sha256", 00:13:30.941 "sha384", 00:13:30.941 "sha512" 00:13:30.941 ], 00:13:30.941 "dhchap_dhgroups": [ 00:13:30.941 "null", 00:13:30.941 "ffdhe2048", 00:13:30.941 "ffdhe3072", 00:13:30.941 "ffdhe4096", 00:13:30.941 "ffdhe6144", 00:13:30.941 "ffdhe8192" 00:13:30.941 ] 00:13:30.941 } 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "method": "nvmf_set_max_subsystems", 00:13:30.941 "params": { 00:13:30.941 "max_subsystems": 1024 00:13:30.941 } 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "method": "nvmf_set_crdt", 00:13:30.941 "params": { 00:13:30.941 "crdt1": 0, 00:13:30.941 "crdt2": 0, 00:13:30.941 "crdt3": 0 00:13:30.941 } 00:13:30.941 }, 00:13:30.941 { 00:13:30.941 "method": "nvmf_create_transport", 00:13:30.941 "params": { 00:13:30.941 "trtype": "TCP", 00:13:30.941 "max_queue_depth": 128, 00:13:30.941 "max_io_qpairs_per_ctrlr": 127, 00:13:30.941 "in_capsule_data_size": 4096, 00:13:30.941 "max_io_size": 131072, 00:13:30.941 "io_unit_size": 131072, 00:13:30.941 "max_aq_depth": 128, 00:13:30.941 "num_shared_buffers": 511, 00:13:30.941 "buf_cache_size": 4294967295, 00:13:30.941 "dif_insert_or_strip": false, 00:13:30.941 "zcopy": false, 00:13:30.941 "c2h_success": false, 00:13:30.941 "sock_priority": 0, 00:13:30.941 "abort_timeout_sec": 1, 00:13:30.941 "ack_timeout": 0, 00:13:30.941 "data_wr_pool_size": 0 00:13:30.941 } 00:13:30.942 }, 00:13:30.942 { 00:13:30.942 "method": "nvmf_create_subsystem", 00:13:30.942 "params": { 00:13:30.942 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.942 "allow_any_host": false, 00:13:30.942 "serial_number": "00000000000000000000", 00:13:30.942 "model_number": "SPDK bdev Controller", 00:13:30.942 "max_namespaces": 32, 00:13:30.942 "min_cntlid": 1, 00:13:30.942 "max_cntlid": 65519, 00:13:30.942 "ana_reporting": false 00:13:30.942 } 00:13:30.942 }, 00:13:30.942 { 00:13:30.942 "method": "nvmf_subsystem_add_host", 00:13:30.942 "params": { 00:13:30.942 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.942 "host": "nqn.2016-06.io.spdk:host1", 00:13:30.942 "psk": "key0" 00:13:30.942 } 00:13:30.942 }, 00:13:30.942 { 00:13:30.942 "method": "nvmf_subsystem_add_ns", 00:13:30.942 "params": { 00:13:30.942 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.942 "namespace": { 00:13:30.942 "nsid": 1, 00:13:30.942 "bdev_name": "malloc0", 00:13:30.942 "nguid": "5E04F93D6F6D4DB28618C33F3C2FB871", 00:13:30.942 "uuid": "5e04f93d-6f6d-4db2-8618-c33f3c2fb871", 00:13:30.942 "no_auto_visible": false 00:13:30.942 } 00:13:30.942 } 00:13:30.942 }, 00:13:30.942 { 00:13:30.942 "method": "nvmf_subsystem_add_listener", 00:13:30.942 "params": { 00:13:30.942 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.942 "listen_address": { 00:13:30.942 "trtype": "TCP", 00:13:30.942 "adrfam": "IPv4", 00:13:30.942 "traddr": "10.0.0.3", 00:13:30.942 "trsvcid": "4420" 00:13:30.942 }, 00:13:30.942 "secure_channel": false, 00:13:30.942 "sock_impl": "ssl" 00:13:30.942 } 00:13:30.942 } 00:13:30.942 ] 00:13:30.942 } 00:13:30.942 ] 00:13:30.942 }' 00:13:30.942 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:13:30.942 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:30.942 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:13:30.942 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.942 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72397 00:13:30.942 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:13:30.942 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72397 00:13:30.942 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72397 ']' 00:13:30.942 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.942 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.942 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.942 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.942 10:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.942 [2024-12-09 10:56:23.925285] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:13:30.942 [2024-12-09 10:56:23.925409] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.942 [2024-12-09 10:56:24.075099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.942 [2024-12-09 10:56:24.117436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.942 [2024-12-09 10:56:24.117482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.942 [2024-12-09 10:56:24.117488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.942 [2024-12-09 10:56:24.117493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.942 [2024-12-09 10:56:24.117497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
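As the notices above indicate, the tracepoint buffer the target creates in shared memory can be inspected either live or offline; a minimal sketch of both options (the shm file name comes from this run, the copy destination is arbitrary):

    # Snapshot nvmf tracepoints from the running target (shared-memory instance id 0)
    spdk_trace -s nvmf -i 0
    # Or preserve the raw buffer for offline analysis, as the test's cleanup later does with tar
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0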
00:13:30.942 [2024-12-09 10:56:24.117828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.201 [2024-12-09 10:56:24.270096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:31.201 [2024-12-09 10:56:24.339414] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.201 [2024-12-09 10:56:24.371290] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:31.201 [2024-12-09 10:56:24.371446] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:31.770 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:31.770 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:31.770 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:31.770 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:31.770 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:31.770 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.770 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72429 00:13:31.770 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72429 /var/tmp/bdevperf.sock 00:13:31.770 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72429 ']' 00:13:31.770 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:31.770 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:13:31.770 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.770 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:31.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
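The bdevperf instance being started here (pid 72429) does not get configured over RPC after it comes up; instead, the configuration captured from the first run with save_config is replayed as a startup config on file descriptor 63. A sketch of that pattern, with the fd supplied via bash process substitution rather than the test's explicit /dev/fd/63 redirection:

    # Capture the live JSON configuration of the already-running bdevperf app...
    bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    # ...and hand it to a fresh instance at startup instead of re-issuing each RPC
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")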
00:13:31.770 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:13:31.770 "subsystems": [ 00:13:31.770 { 00:13:31.770 "subsystem": "keyring", 00:13:31.770 "config": [ 00:13:31.770 { 00:13:31.770 "method": "keyring_file_add_key", 00:13:31.770 "params": { 00:13:31.770 "name": "key0", 00:13:31.770 "path": "/tmp/tmp.ltsKTZPN6i" 00:13:31.770 } 00:13:31.770 } 00:13:31.770 ] 00:13:31.770 }, 00:13:31.770 { 00:13:31.770 "subsystem": "iobuf", 00:13:31.770 "config": [ 00:13:31.770 { 00:13:31.770 "method": "iobuf_set_options", 00:13:31.770 "params": { 00:13:31.770 "small_pool_count": 8192, 00:13:31.770 "large_pool_count": 1024, 00:13:31.770 "small_bufsize": 8192, 00:13:31.770 "large_bufsize": 135168, 00:13:31.770 "enable_numa": false 00:13:31.770 } 00:13:31.770 } 00:13:31.770 ] 00:13:31.770 }, 00:13:31.770 { 00:13:31.770 "subsystem": "sock", 00:13:31.770 "config": [ 00:13:31.770 { 00:13:31.770 "method": "sock_set_default_impl", 00:13:31.770 "params": { 00:13:31.770 "impl_name": "uring" 00:13:31.770 } 00:13:31.770 }, 00:13:31.770 { 00:13:31.770 "method": "sock_impl_set_options", 00:13:31.770 "params": { 00:13:31.770 "impl_name": "ssl", 00:13:31.770 "recv_buf_size": 4096, 00:13:31.770 "send_buf_size": 4096, 00:13:31.770 "enable_recv_pipe": true, 00:13:31.770 "enable_quickack": false, 00:13:31.770 "enable_placement_id": 0, 00:13:31.770 "enable_zerocopy_send_server": true, 00:13:31.770 "enable_zerocopy_send_client": false, 00:13:31.770 "zerocopy_threshold": 0, 00:13:31.770 "tls_version": 0, 00:13:31.770 "enable_ktls": false 00:13:31.770 } 00:13:31.770 }, 00:13:31.770 { 00:13:31.770 "method": "sock_impl_set_options", 00:13:31.770 "params": { 00:13:31.770 "impl_name": "posix", 00:13:31.770 "recv_buf_size": 2097152, 00:13:31.770 "send_buf_size": 2097152, 00:13:31.770 "enable_recv_pipe": true, 00:13:31.770 "enable_quickack": false, 00:13:31.770 "enable_placement_id": 0, 00:13:31.770 "enable_zerocopy_send_server": true, 00:13:31.770 "enable_zerocopy_send_client": false, 00:13:31.770 "zerocopy_threshold": 0, 00:13:31.770 "tls_version": 0, 00:13:31.770 "enable_ktls": false 00:13:31.770 } 00:13:31.770 }, 00:13:31.770 { 00:13:31.771 "method": "sock_impl_set_options", 00:13:31.771 "params": { 00:13:31.771 "impl_name": "uring", 00:13:31.771 "recv_buf_size": 2097152, 00:13:31.771 "send_buf_size": 2097152, 00:13:31.771 "enable_recv_pipe": true, 00:13:31.771 "enable_quickack": false, 00:13:31.771 "enable_placement_id": 0, 00:13:31.771 "enable_zerocopy_send_server": false, 00:13:31.771 "enable_zerocopy_send_client": false, 00:13:31.771 "zerocopy_threshold": 0, 00:13:31.771 "tls_version": 0, 00:13:31.771 "enable_ktls": false 00:13:31.771 } 00:13:31.771 } 00:13:31.771 ] 00:13:31.771 }, 00:13:31.771 { 00:13:31.771 "subsystem": "vmd", 00:13:31.771 "config": [] 00:13:31.771 }, 00:13:31.771 { 00:13:31.771 "subsystem": "accel", 00:13:31.771 "config": [ 00:13:31.771 { 00:13:31.771 "method": "accel_set_options", 00:13:31.771 "params": { 00:13:31.771 "small_cache_size": 128, 00:13:31.771 "large_cache_size": 16, 00:13:31.771 "task_count": 2048, 00:13:31.771 "sequence_count": 2048, 00:13:31.771 "buf_count": 2048 00:13:31.771 } 00:13:31.771 } 00:13:31.771 ] 00:13:31.771 }, 00:13:31.771 { 00:13:31.771 "subsystem": "bdev", 00:13:31.771 "config": [ 00:13:31.771 { 00:13:31.771 "method": "bdev_set_options", 00:13:31.771 "params": { 00:13:31.771 "bdev_io_pool_size": 65535, 00:13:31.771 "bdev_io_cache_size": 256, 00:13:31.771 "bdev_auto_examine": true, 00:13:31.771 "iobuf_small_cache_size": 128, 00:13:31.771 
"iobuf_large_cache_size": 16 00:13:31.771 } 00:13:31.771 }, 00:13:31.771 { 00:13:31.771 "method": "bdev_raid_set_options", 00:13:31.771 "params": { 00:13:31.771 "process_window_size_kb": 1024, 00:13:31.771 "process_max_bandwidth_mb_sec": 0 00:13:31.771 } 00:13:31.771 }, 00:13:31.771 { 00:13:31.771 "method": "bdev_iscsi_set_options", 00:13:31.771 "params": { 00:13:31.771 "timeout_sec": 30 00:13:31.771 } 00:13:31.771 }, 00:13:31.771 { 00:13:31.771 "method": "bdev_nvme_set_options", 00:13:31.771 "params": { 00:13:31.771 "action_on_timeout": "none", 00:13:31.771 "timeout_us": 0, 00:13:31.771 "timeout_admin_us": 0, 00:13:31.771 "keep_alive_timeout_ms": 10000, 00:13:31.771 "arbitration_burst": 0, 00:13:31.771 "low_priority_weight": 0, 00:13:31.771 "medium_priority_weight": 0, 00:13:31.771 "high_priority_weight": 0, 00:13:31.771 "nvme_adminq_poll_period_us": 10000, 00:13:31.771 "nvme_ioq_poll_period_us": 0, 00:13:31.771 "io_queue_requests": 512, 00:13:31.771 "delay_cmd_submit": true, 00:13:31.771 "transport_retry_count": 4, 00:13:31.771 "bdev_retry_count": 3, 00:13:31.771 "transport_ack_timeout": 0, 00:13:31.771 "ctrlr_loss_timeout_sec": 0, 00:13:31.771 "reconnect_delay_sec": 0, 00:13:31.771 "fast_io_fail_timeout_sec": 0, 00:13:31.771 "disable_auto_failback": false, 00:13:31.771 "generate_uuids": false, 00:13:31.771 "transport_tos": 0, 00:13:31.771 "nvme_error_stat": false, 00:13:31.771 "rdma_srq_size": 0, 00:13:31.771 "io_path_stat": false, 00:13:31.771 "allow_accel_sequence": false, 00:13:31.771 "rdma_max_cq_size": 0, 00:13:31.771 "rdma_cm_event_timeout_ms": 0, 00:13:31.771 "dhchap_digests": [ 00:13:31.771 "sha256", 00:13:31.771 "sha384", 00:13:31.771 "sha512" 00:13:31.771 ], 00:13:31.771 "dhchap_dhgroups": [ 00:13:31.771 "null", 00:13:31.771 "ffdhe2048", 00:13:31.771 "ffdhe3072", 00:13:31.771 "ffdhe4096", 00:13:31.771 "ffdhe6144", 00:13:31.771 "ffdhe8192" 00:13:31.771 ] 00:13:31.771 } 00:13:31.771 }, 00:13:31.771 { 00:13:31.771 "method": "bdev_nvme_attach_controller", 00:13:31.771 "params": { 00:13:31.771 "name": "nvme0", 00:13:31.771 "trtype": "TCP", 00:13:31.771 "adrfam": "IPv4", 00:13:31.771 "traddr": "10.0.0.3", 00:13:31.771 "trsvcid": "4420", 00:13:31.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:31.771 "prchk_reftag": false, 00:13:31.771 "prchk_guard": false, 00:13:31.771 "ctrlr_loss_timeout_sec": 0, 00:13:31.771 "reconnect_delay_sec": 0, 00:13:31.771 "fast_io_fail_timeout_sec": 0, 00:13:31.771 "psk": "key0", 00:13:31.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:31.771 "hdgst": false, 00:13:31.771 "ddgst": false, 00:13:31.771 "multipath": "multipath" 00:13:31.771 } 00:13:31.771 }, 00:13:31.771 { 00:13:31.771 "method": "bdev_nvme_set_hotplug", 00:13:31.771 "params": { 00:13:31.771 "period_us": 100000, 00:13:31.771 "enable": false 00:13:31.771 } 00:13:31.771 }, 00:13:31.771 { 00:13:31.771 "method": "bdev_enable_histogram", 00:13:31.771 "params": { 00:13:31.771 "name": "nvme0n1", 00:13:31.771 "enable": true 00:13:31.771 } 00:13:31.771 }, 00:13:31.771 { 00:13:31.771 "method": "bdev_wait_for_examine" 00:13:31.771 } 00:13:31.771 ] 00:13:31.771 }, 00:13:31.771 { 00:13:31.771 "subsystem": "nbd", 00:13:31.771 "config": [] 00:13:31.771 } 00:13:31.771 ] 00:13:31.771 }' 00:13:31.771 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.771 10:56:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:31.771 [2024-12-09 10:56:24.874649] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 
initialization... 00:13:31.771 [2024-12-09 10:56:24.874775] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72429 ] 00:13:32.031 [2024-12-09 10:56:25.025314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.031 [2024-12-09 10:56:25.069665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.031 [2024-12-09 10:56:25.191331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:32.289 [2024-12-09 10:56:25.234267] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:32.548 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.548 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:32.548 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:13:32.548 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:13:32.807 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.807 10:56:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:33.065 Running I/O for 1 seconds... 00:13:34.004 6720.00 IOPS, 26.25 MiB/s 00:13:34.004 Latency(us) 00:13:34.004 [2024-12-09T10:56:27.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.004 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:34.004 Verification LBA range: start 0x0 length 0x2000 00:13:34.004 nvme0n1 : 1.01 6779.62 26.48 0.00 0.00 18761.81 3548.67 17171.00 00:13:34.004 [2024-12-09T10:56:27.183Z] =================================================================================================================== 00:13:34.004 [2024-12-09T10:56:27.183Z] Total : 6779.62 26.48 0.00 0.00 18761.81 3548.67 17171.00 00:13:34.004 { 00:13:34.004 "results": [ 00:13:34.004 { 00:13:34.004 "job": "nvme0n1", 00:13:34.004 "core_mask": "0x2", 00:13:34.004 "workload": "verify", 00:13:34.004 "status": "finished", 00:13:34.004 "verify_range": { 00:13:34.004 "start": 0, 00:13:34.004 "length": 8192 00:13:34.004 }, 00:13:34.004 "queue_depth": 128, 00:13:34.004 "io_size": 4096, 00:13:34.004 "runtime": 1.010086, 00:13:34.004 "iops": 6779.620745164273, 00:13:34.004 "mibps": 26.482893535797942, 00:13:34.004 "io_failed": 0, 00:13:34.004 "io_timeout": 0, 00:13:34.004 "avg_latency_us": 18761.811957719463, 00:13:34.004 "min_latency_us": 3548.6742358078604, 00:13:34.004 "max_latency_us": 17171.004366812227 00:13:34.004 } 00:13:34.004 ], 00:13:34.004 "core_count": 1 00:13:34.004 } 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@813 -- # id=0 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:34.004 nvmf_trace.0 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72429 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72429 ']' 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72429 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:34.004 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72429 00:13:34.263 killing process with pid 72429 00:13:34.263 Received shutdown signal, test time was about 1.000000 seconds 00:13:34.263 00:13:34.263 Latency(us) 00:13:34.263 [2024-12-09T10:56:27.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.263 [2024-12-09T10:56:27.442Z] =================================================================================================================== 00:13:34.263 [2024-12-09T10:56:27.442Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:34.263 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:34.263 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:34.263 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72429' 00:13:34.263 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72429 00:13:34.263 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72429 00:13:34.263 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:13:34.263 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:34.263 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:34.523 rmmod nvme_tcp 00:13:34.523 rmmod nvme_fabrics 00:13:34.523 rmmod nvme_keyring 00:13:34.523 10:56:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72397 ']' 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72397 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72397 ']' 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72397 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72397 00:13:34.523 killing process with pid 72397 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72397' 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72397 00:13:34.523 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72397 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:34.781 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 00:13:34.782 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:34.782 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:34.782 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:35.040 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:35.040 10:56:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:35.040 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:35.040 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.040 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.040 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.040 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:13:35.040 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Xp689IywgR /tmp/tmp.sxsC766G4v /tmp/tmp.ltsKTZPN6i 00:13:35.040 00:13:35.040 real 1m23.746s 00:13:35.040 user 2m14.140s 00:13:35.040 sys 0m25.579s 00:13:35.040 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.040 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:35.040 ************************************ 00:13:35.040 END TEST nvmf_tls 00:13:35.040 ************************************ 00:13:35.040 10:56:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:35.040 10:56:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:35.040 10:56:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.040 10:56:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:35.040 ************************************ 00:13:35.040 START TEST nvmf_fips 00:13:35.040 ************************************ 00:13:35.040 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:35.300 * Looking for test storage... 
00:13:35.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:35.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.300 --rc genhtml_branch_coverage=1 00:13:35.300 --rc genhtml_function_coverage=1 00:13:35.300 --rc genhtml_legend=1 00:13:35.300 --rc geninfo_all_blocks=1 00:13:35.300 --rc geninfo_unexecuted_blocks=1 00:13:35.300 00:13:35.300 ' 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:35.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.300 --rc genhtml_branch_coverage=1 00:13:35.300 --rc genhtml_function_coverage=1 00:13:35.300 --rc genhtml_legend=1 00:13:35.300 --rc geninfo_all_blocks=1 00:13:35.300 --rc geninfo_unexecuted_blocks=1 00:13:35.300 00:13:35.300 ' 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:35.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.300 --rc genhtml_branch_coverage=1 00:13:35.300 --rc genhtml_function_coverage=1 00:13:35.300 --rc genhtml_legend=1 00:13:35.300 --rc geninfo_all_blocks=1 00:13:35.300 --rc geninfo_unexecuted_blocks=1 00:13:35.300 00:13:35.300 ' 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:35.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.300 --rc genhtml_branch_coverage=1 00:13:35.300 --rc genhtml_function_coverage=1 00:13:35.300 --rc genhtml_legend=1 00:13:35.300 --rc geninfo_all_blocks=1 00:13:35.300 --rc geninfo_unexecuted_blocks=1 00:13:35.300 00:13:35.300 ' 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
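The lt/ge checks traced here come from cmp_versions in scripts/common.sh: each version string is split on '.', '-' and ':', every field is validated as a decimal ([[ $d =~ ^[0-9]+$ ]]), and the fields are compared position by position with the shorter list padded with zeros. A simplified re-implementation of the same idea, assuming purely numeric fields (not the actual scripts/common.sh code):

    # Usage: version_lt 1.15 2   -> returns 0 (true) when $1 < $2
    version_lt() {
        local IFS=.-:
        local -a v1=($1) v2=($2)
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Missing fields compare as 0, e.g. 2 is treated as 2.0.
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # equal, therefore not strictly less
    }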
00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:35.300 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:35.301 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:13:35.301 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:13:35.561 Error setting digest 00:13:35.561 40023F56EF7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:13:35.561 40023F56EF7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:35.561 
10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:35.561 Cannot find device "nvmf_init_br" 00:13:35.561 10:56:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:35.561 Cannot find device "nvmf_init_br2" 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:35.561 Cannot find device "nvmf_tgt_br" 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:35.561 Cannot find device "nvmf_tgt_br2" 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:35.561 Cannot find device "nvmf_init_br" 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:35.561 Cannot find device "nvmf_init_br2" 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:35.561 Cannot find device "nvmf_tgt_br" 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:35.561 Cannot find device "nvmf_tgt_br2" 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:13:35.561 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:35.821 Cannot find device "nvmf_br" 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:35.821 Cannot find device "nvmf_init_if" 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:35.821 Cannot find device "nvmf_init_if2" 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:35.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:35.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:35.821 10:56:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:35.821 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:35.821 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:13:35.821 00:13:35.821 --- 10.0.0.3 ping statistics --- 00:13:35.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.821 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:35.821 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:35.821 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:13:35.821 00:13:35.821 --- 10.0.0.4 ping statistics --- 00:13:35.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.821 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:35.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:35.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:13:35.821 00:13:35.821 --- 10.0.0.1 ping statistics --- 00:13:35.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.821 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:35.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:35.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:13:35.821 00:13:35.821 --- 10.0.0.2 ping statistics --- 00:13:35.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.821 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72742 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72742 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72742 ']' 00:13:35.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.821 10:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:36.080 [2024-12-09 10:56:29.037270] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
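nvmfappstart, as traced above, launches nvmf_tgt inside the test namespace with core mask 0x2 and full tracepoints (-e 0xFFFF), records its pid as nvmfpid, and then waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A sketch of that launch-and-poll shape, using the paths from the log (waitforlisten itself adds retries and a timeout, so this is only the general idea):

    # Start the target under the namespace and remember its pid.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the RPC socket until the application is ready to serve requests.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &> /dev/null; do
        sleep 0.5
    done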
00:13:36.080 [2024-12-09 10:56:29.037329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.080 [2024-12-09 10:56:29.188022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.081 [2024-12-09 10:56:29.231439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.081 [2024-12-09 10:56:29.231478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.081 [2024-12-09 10:56:29.231484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.081 [2024-12-09 10:56:29.231488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.081 [2024-12-09 10:56:29.231492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.081 [2024-12-09 10:56:29.231773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.339 [2024-12-09 10:56:29.272125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:36.907 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.907 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:13:36.907 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:36.907 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:36.907 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:36.907 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.907 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:13:36.907 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:36.907 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:13:36.907 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.huf 00:13:36.907 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:36.907 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.huf 00:13:36.907 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.huf 00:13:36.907 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.huf 00:13:36.907 10:56:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:37.166 [2024-12-09 10:56:30.136594] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.166 [2024-12-09 10:56:30.152525] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:37.166 [2024-12-09 10:56:30.152691] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:37.166 malloc0 00:13:37.166 10:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:37.166 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72782 00:13:37.166 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:37.166 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72782 /var/tmp/bdevperf.sock 00:13:37.166 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 72782 ']' 00:13:37.166 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:37.166 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.166 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:37.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:37.166 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.166 10:56:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:37.166 [2024-12-09 10:56:30.291301] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:13:37.166 [2024-12-09 10:56:30.291441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72782 ] 00:13:37.425 [2024-12-09 10:56:30.426085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.425 [2024-12-09 10:56:30.481233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.425 [2024-12-09 10:56:30.523139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:38.363 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.363 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:13:38.363 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.huf 00:13:38.363 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:38.622 [2024-12-09 10:56:31.600610] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:38.622 TLSTESTn1 00:13:38.622 10:56:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:38.881 Running I/O for 10 seconds... 
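The TLSTESTn1 run that follows is driven by the three RPC steps traced just above: the PSK file is registered with the bdevperf app's keyring, the controller is attached over TCP/TLS with that key, and perform_tests starts the queued workload. Consolidated, with the rpc.py and bdevperf.py paths shortened from the ones shown in the log (the bdevperf process itself was started earlier with -z -q 128 -o 4096 -w verify -t 10, so it idles until these calls arrive):

    # Register the PSK with the bdevperf app, attach over TLS, then run I/O.
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.huf
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # Kick off the verify workload; progress is reported per second and the
    # final results are emitted both as a table and as JSON.
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests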
00:13:40.757 5685.00 IOPS, 22.21 MiB/s [2024-12-09T10:56:34.913Z] 5716.50 IOPS, 22.33 MiB/s [2024-12-09T10:56:35.931Z] 5929.00 IOPS, 23.16 MiB/s [2024-12-09T10:56:36.868Z] 5807.75 IOPS, 22.69 MiB/s [2024-12-09T10:56:37.805Z] 5701.20 IOPS, 22.27 MiB/s [2024-12-09T10:56:39.180Z] 5631.67 IOPS, 22.00 MiB/s [2024-12-09T10:56:40.115Z] 5608.00 IOPS, 21.91 MiB/s [2024-12-09T10:56:41.052Z] 5574.50 IOPS, 21.78 MiB/s [2024-12-09T10:56:41.988Z] 5530.78 IOPS, 21.60 MiB/s [2024-12-09T10:56:41.988Z] 5506.20 IOPS, 21.51 MiB/s 00:13:48.809 Latency(us) 00:13:48.809 [2024-12-09T10:56:41.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.809 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:48.809 Verification LBA range: start 0x0 length 0x2000 00:13:48.809 TLSTESTn1 : 10.01 5511.56 21.53 0.00 0.00 23184.99 4607.55 18773.63 00:13:48.809 [2024-12-09T10:56:41.988Z] =================================================================================================================== 00:13:48.809 [2024-12-09T10:56:41.988Z] Total : 5511.56 21.53 0.00 0.00 23184.99 4607.55 18773.63 00:13:48.809 { 00:13:48.809 "results": [ 00:13:48.809 { 00:13:48.809 "job": "TLSTESTn1", 00:13:48.809 "core_mask": "0x4", 00:13:48.809 "workload": "verify", 00:13:48.809 "status": "finished", 00:13:48.809 "verify_range": { 00:13:48.809 "start": 0, 00:13:48.809 "length": 8192 00:13:48.809 }, 00:13:48.809 "queue_depth": 128, 00:13:48.809 "io_size": 4096, 00:13:48.809 "runtime": 10.012963, 00:13:48.809 "iops": 5511.555370772867, 00:13:48.809 "mibps": 21.52951316708151, 00:13:48.809 "io_failed": 0, 00:13:48.809 "io_timeout": 0, 00:13:48.809 "avg_latency_us": 23184.99149355075, 00:13:48.809 "min_latency_us": 4607.552838427948, 00:13:48.809 "max_latency_us": 18773.631441048034 00:13:48.809 } 00:13:48.809 ], 00:13:48.809 "core_count": 1 00:13:48.809 } 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:48.809 nvmf_trace.0 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72782 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72782 ']' 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
72782 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72782 00:13:48.809 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:49.068 killing process with pid 72782 00:13:49.068 Received shutdown signal, test time was about 10.000000 seconds 00:13:49.068 00:13:49.068 Latency(us) 00:13:49.068 [2024-12-09T10:56:42.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.068 [2024-12-09T10:56:42.247Z] =================================================================================================================== 00:13:49.068 [2024-12-09T10:56:42.247Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:49.068 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:49.068 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72782' 00:13:49.068 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72782 00:13:49.068 10:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72782 00:13:49.068 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:13:49.068 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:49.068 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:49.328 rmmod nvme_tcp 00:13:49.328 rmmod nvme_fabrics 00:13:49.328 rmmod nvme_keyring 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72742 ']' 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72742 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 72742 ']' 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 72742 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72742 00:13:49.328 killing process with pid 72742 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72742' 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 72742 00:13:49.328 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 72742 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:49.587 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:49.846 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:49.846 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:49.846 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:49.846 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:49.846 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.846 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.846 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.846 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:13:49.846 10:56:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.huf 00:13:49.846 00:13:49.846 real 0m14.767s 00:13:49.846 user 0m20.633s 00:13:49.846 sys 0m5.483s 00:13:49.846 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.846 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:13:49.846 ************************************ 00:13:49.846 END TEST nvmf_fips 00:13:49.846 ************************************ 00:13:49.846 10:56:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:13:49.846 10:56:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:49.846 10:56:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.846 10:56:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:49.846 ************************************ 00:13:49.846 START TEST nvmf_control_msg_list 00:13:49.846 ************************************ 00:13:49.846 10:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:13:50.106 * Looking for test storage... 00:13:50.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:50.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.106 --rc genhtml_branch_coverage=1 00:13:50.106 --rc genhtml_function_coverage=1 00:13:50.106 --rc genhtml_legend=1 00:13:50.106 --rc geninfo_all_blocks=1 00:13:50.106 --rc geninfo_unexecuted_blocks=1 00:13:50.106 00:13:50.106 ' 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:50.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.106 --rc genhtml_branch_coverage=1 00:13:50.106 --rc genhtml_function_coverage=1 00:13:50.106 --rc genhtml_legend=1 00:13:50.106 --rc geninfo_all_blocks=1 00:13:50.106 --rc geninfo_unexecuted_blocks=1 00:13:50.106 00:13:50.106 ' 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:50.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.106 --rc genhtml_branch_coverage=1 00:13:50.106 --rc genhtml_function_coverage=1 00:13:50.106 --rc genhtml_legend=1 00:13:50.106 --rc geninfo_all_blocks=1 00:13:50.106 --rc geninfo_unexecuted_blocks=1 00:13:50.106 00:13:50.106 ' 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:50.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.106 --rc genhtml_branch_coverage=1 00:13:50.106 --rc genhtml_function_coverage=1 00:13:50.106 --rc genhtml_legend=1 00:13:50.106 --rc geninfo_all_blocks=1 00:13:50.106 --rc 
geninfo_unexecuted_blocks=1 00:13:50.106 00:13:50.106 ' 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.106 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:50.107 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:50.107 Cannot find device "nvmf_init_br" 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:50.107 Cannot find device "nvmf_init_br2" 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:13:50.107 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:50.366 Cannot find device "nvmf_tgt_br" 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:50.366 Cannot find device "nvmf_tgt_br2" 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:50.366 Cannot find device "nvmf_init_br" 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:50.366 Cannot find device "nvmf_init_br2" 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:50.366 Cannot find device "nvmf_tgt_br" 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:50.366 Cannot find device "nvmf_tgt_br2" 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:50.366 Cannot find device "nvmf_br" 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:50.366 Cannot find 
device "nvmf_init_if" 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:50.366 Cannot find device "nvmf_init_if2" 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:50.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:50.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:50.366 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:50.367 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:50.367 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:50.367 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:50.367 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:50.367 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:50.367 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:50.367 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:50.367 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:50.367 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:50.367 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:50.367 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:50.367 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:50.626 10:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:50.626 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:50.626 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:13:50.626 00:13:50.626 --- 10.0.0.3 ping statistics --- 00:13:50.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.626 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:50.626 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:50.626 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:13:50.626 00:13:50.626 --- 10.0.0.4 ping statistics --- 00:13:50.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.626 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:50.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:50.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:13:50.626 00:13:50.626 --- 10.0.0.1 ping statistics --- 00:13:50.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.626 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:50.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:13:50.626 00:13:50.626 --- 10.0.0.2 ping statistics --- 00:13:50.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.626 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73174 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73174 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73174 ']' 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.626 10:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:50.626 [2024-12-09 10:56:43.757998] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:13:50.626 [2024-12-09 10:56:43.758079] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.885 [2024-12-09 10:56:43.912517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.885 [2024-12-09 10:56:43.968280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.885 [2024-12-09 10:56:43.968344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.885 [2024-12-09 10:56:43.968354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.885 [2024-12-09 10:56:43.968362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.885 [2024-12-09 10:56:43.968368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.885 [2024-12-09 10:56:43.968652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.885 [2024-12-09 10:56:44.011956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:51.823 [2024-12-09 10:56:44.723963] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:51.823 Malloc0 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:51.823 [2024-12-09 10:56:44.776813] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73204 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73205 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73206 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:51.823 10:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73204 00:13:51.823 [2024-12-09 10:56:44.966766] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:51.823 [2024-12-09 10:56:44.986707] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:51.823 [2024-12-09 10:56:44.996757] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:53.198 Initializing NVMe Controllers 00:13:53.198 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:53.198 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:13:53.198 Initialization complete. Launching workers. 00:13:53.198 ======================================================== 00:13:53.198 Latency(us) 00:13:53.198 Device Information : IOPS MiB/s Average min max 00:13:53.198 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 5024.98 19.63 198.76 86.90 440.66 00:13:53.198 ======================================================== 00:13:53.198 Total : 5024.98 19.63 198.76 86.90 440.66 00:13:53.198 00:13:53.198 Initializing NVMe Controllers 00:13:53.198 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:53.198 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:13:53.198 Initialization complete. Launching workers. 00:13:53.198 ======================================================== 00:13:53.198 Latency(us) 00:13:53.198 Device Information : IOPS MiB/s Average min max 00:13:53.198 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4988.00 19.48 200.23 91.45 443.11 00:13:53.198 ======================================================== 00:13:53.198 Total : 4988.00 19.48 200.23 91.45 443.11 00:13:53.198 00:13:53.198 Initializing NVMe Controllers 00:13:53.198 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:53.198 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:13:53.198 Initialization complete. Launching workers. 
00:13:53.198 ======================================================== 00:13:53.198 Latency(us) 00:13:53.198 Device Information : IOPS MiB/s Average min max 00:13:53.198 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5042.94 19.70 198.03 73.39 355.97 00:13:53.198 ======================================================== 00:13:53.198 Total : 5042.94 19.70 198.03 73.39 355.97 00:13:53.198 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73205 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73206 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:53.198 rmmod nvme_tcp 00:13:53.198 rmmod nvme_fabrics 00:13:53.198 rmmod nvme_keyring 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73174 ']' 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73174 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73174 ']' 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73174 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73174 00:13:53.198 killing process with pid 73174 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73174' 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73174 00:13:53.198 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73174 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:53.458 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:53.717 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:53.717 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:53.717 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.717 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:53.717 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.717 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:13:53.717 00:13:53.717 real 0m3.752s 00:13:53.717 user 0m5.856s 00:13:53.717 
sys 0m1.488s 00:13:53.718 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.718 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:13:53.718 ************************************ 00:13:53.718 END TEST nvmf_control_msg_list 00:13:53.718 ************************************ 00:13:53.718 10:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:13:53.718 10:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:53.718 10:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:53.718 10:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:53.718 ************************************ 00:13:53.718 START TEST nvmf_wait_for_buf 00:13:53.718 ************************************ 00:13:53.718 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:13:53.718 * Looking for test storage... 00:13:53.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:53.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.978 --rc genhtml_branch_coverage=1 00:13:53.978 --rc genhtml_function_coverage=1 00:13:53.978 --rc genhtml_legend=1 00:13:53.978 --rc geninfo_all_blocks=1 00:13:53.978 --rc geninfo_unexecuted_blocks=1 00:13:53.978 00:13:53.978 ' 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:53.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.978 --rc genhtml_branch_coverage=1 00:13:53.978 --rc genhtml_function_coverage=1 00:13:53.978 --rc genhtml_legend=1 00:13:53.978 --rc geninfo_all_blocks=1 00:13:53.978 --rc geninfo_unexecuted_blocks=1 00:13:53.978 00:13:53.978 ' 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:53.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.978 --rc genhtml_branch_coverage=1 00:13:53.978 --rc genhtml_function_coverage=1 00:13:53.978 --rc genhtml_legend=1 00:13:53.978 --rc geninfo_all_blocks=1 00:13:53.978 --rc geninfo_unexecuted_blocks=1 00:13:53.978 00:13:53.978 ' 00:13:53.978 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:53.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.979 --rc genhtml_branch_coverage=1 00:13:53.979 --rc genhtml_function_coverage=1 00:13:53.979 --rc genhtml_legend=1 00:13:53.979 --rc geninfo_all_blocks=1 00:13:53.979 --rc geninfo_unexecuted_blocks=1 00:13:53.979 00:13:53.979 ' 00:13:53.979 10:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:53.979 10:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:53.979 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
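nvmftestinit here repeats the same nvmf_veth_init sequence seen earlier in the control_msg_list run: two veth pairs for the initiator side, two for the target side moved into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, plus iptables ACCEPT rules for TCP port 4420. A condensed sketch of that topology, using the same interface names and 10.0.0.x addresses as this log, is shown below; it is a simplified reconstruction of what common.sh does, not a drop-in replacement for it.

```bash
#!/usr/bin/env bash
# Condensed sketch of the veth/bridge topology nvmf_veth_init builds in this log.
# Interface names and addresses match the log; cleanup and error handling are omitted.
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# veth pairs: *_if is the endpoint that carries an address, *_br joins the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side endpoints live inside the namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addresses: initiators on the host, targets in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Join the *_br ends through a single bridge.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP traffic to port 4420 and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
```

With this in place, the pings from the host to 10.0.0.3/10.0.0.4 and from inside the namespace back to 10.0.0.1/10.0.0.2, as seen in the log, verify the bridged path before the target application is started.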
00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:53.979 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:53.980 Cannot find device "nvmf_init_br" 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:53.980 Cannot find device "nvmf_init_br2" 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:53.980 Cannot find device "nvmf_tgt_br" 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:53.980 Cannot find device "nvmf_tgt_br2" 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:53.980 Cannot find device "nvmf_init_br" 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:53.980 Cannot find device "nvmf_init_br2" 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:13:53.980 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:54.240 Cannot find device "nvmf_tgt_br" 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:54.240 Cannot find device "nvmf_tgt_br2" 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:54.240 Cannot find device "nvmf_br" 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:54.240 Cannot find device "nvmf_init_if" 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:54.240 Cannot find device "nvmf_init_if2" 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:54.240 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:54.240 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:54.240 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:54.502 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:54.502 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.126 ms 00:13:54.502 00:13:54.502 --- 10.0.0.3 ping statistics --- 00:13:54.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.502 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:54.502 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:54.502 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:13:54.502 00:13:54.502 --- 10.0.0.4 ping statistics --- 00:13:54.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.502 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:54.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:54.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:13:54.502 00:13:54.502 --- 10.0.0.1 ping statistics --- 00:13:54.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.502 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:54.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:54.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.028 ms 00:13:54.502 00:13:54.502 --- 10.0.0.2 ping statistics --- 00:13:54.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.502 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73449 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73449 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73449 ']' 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.502 10:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:54.503 [2024-12-09 10:56:47.622959] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:13:54.503 [2024-12-09 10:56:47.623018] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.762 [2024-12-09 10:56:47.773706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.762 [2024-12-09 10:56:47.818495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.762 [2024-12-09 10:56:47.818548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.762 [2024-12-09 10:56:47.818555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.762 [2024-12-09 10:56:47.818559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.762 [2024-12-09 10:56:47.818563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.762 [2024-12-09 10:56:47.818859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.329 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.329 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:13:55.329 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:55.329 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:55.329 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:55.329 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.330 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:13:55.330 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:13:55.330 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:13:55.330 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.330 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.588 10:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:55.588 [2024-12-09 10:56:48.547832] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:55.588 Malloc0 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:55.588 [2024-12-09 10:56:48.602206] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:55.588 [2024-12-09 10:56:48.626228] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.588 10:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:13:55.846 [2024-12-09 10:56:48.820850] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:57.224 Initializing NVMe Controllers 00:13:57.224 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:13:57.224 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:13:57.224 Initialization complete. Launching workers. 00:13:57.224 ======================================================== 00:13:57.224 Latency(us) 00:13:57.224 Device Information : IOPS MiB/s Average min max 00:13:57.224 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.52 62.44 8007.43 6016.67 10012.80 00:13:57.224 ======================================================== 00:13:57.224 Total : 499.52 62.44 8007.43 6016.67 10012.80 00:13:57.224 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:57.224 rmmod nvme_tcp 00:13:57.224 rmmod nvme_fabrics 00:13:57.224 rmmod nvme_keyring 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:13:57.224 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73449 ']' 00:13:57.225 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73449 00:13:57.225 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73449 ']' 00:13:57.225 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 73449 00:13:57.225 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:13:57.225 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.225 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73449 00:13:57.225 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:57.225 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:57.225 killing process with pid 73449 00:13:57.225 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73449' 00:13:57.225 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73449 00:13:57.225 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73449 00:13:57.484 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:57.484 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:57.484 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:57.484 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:13:57.484 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:13:57.484 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:57.484 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:13:57.484 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:57.484 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:57.484 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:57.484 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:57.484 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:57.484 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:57.742 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:57.742 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:57.742 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:57.742 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:57.742 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:57.742 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:57.742 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:57.742 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:57.742 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:57.742 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:57.742 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.742 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.743 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.743 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:13:57.743 00:13:57.743 real 0m4.098s 00:13:57.743 user 0m3.347s 00:13:57.743 sys 0m0.968s 00:13:57.743 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.743 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:13:57.743 ************************************ 00:13:57.743 END TEST nvmf_wait_for_buf 00:13:57.743 ************************************ 00:13:58.002 10:56:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # '[' 0 -eq 1 ']' 00:13:58.002 10:56:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # [[ virt == phy ]] 00:13:58.002 10:56:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@70 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:13:58.002 10:56:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:58.002 10:56:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.003 10:56:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:58.003 ************************************ 00:13:58.003 START TEST nvmf_nsid 00:13:58.003 ************************************ 00:13:58.003 10:56:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:13:58.003 * Looking for test storage... 
00:13:58.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:58.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.003 --rc genhtml_branch_coverage=1 00:13:58.003 --rc genhtml_function_coverage=1 00:13:58.003 --rc genhtml_legend=1 00:13:58.003 --rc geninfo_all_blocks=1 00:13:58.003 --rc geninfo_unexecuted_blocks=1 00:13:58.003 00:13:58.003 ' 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:58.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.003 --rc genhtml_branch_coverage=1 00:13:58.003 --rc genhtml_function_coverage=1 00:13:58.003 --rc genhtml_legend=1 00:13:58.003 --rc geninfo_all_blocks=1 00:13:58.003 --rc geninfo_unexecuted_blocks=1 00:13:58.003 00:13:58.003 ' 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:58.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.003 --rc genhtml_branch_coverage=1 00:13:58.003 --rc genhtml_function_coverage=1 00:13:58.003 --rc genhtml_legend=1 00:13:58.003 --rc geninfo_all_blocks=1 00:13:58.003 --rc geninfo_unexecuted_blocks=1 00:13:58.003 00:13:58.003 ' 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:58.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.003 --rc genhtml_branch_coverage=1 00:13:58.003 --rc genhtml_function_coverage=1 00:13:58.003 --rc genhtml_legend=1 00:13:58.003 --rc geninfo_all_blocks=1 00:13:58.003 --rc geninfo_unexecuted_blocks=1 00:13:58.003 00:13:58.003 ' 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
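Before the nsid setup scrolls past, it is worth recapping what the wait_for_buf run above actually verified: the target is started with --wait-for-rpc, its iobuf small pool is shrunk to 154 buffers and the TCP transport is created with only a handful of shared buffers, so even a modest perf workload is forced to wait for iobufs, and the test passes only if the nvmf_TCP small-pool retry counter ends up non-zero (4750 here). A condensed sketch of that sequence, with the flags copied from the trace above; rpc.py is assumed to talk to the target's default /var/tmp/spdk.sock socket, and the script itself uses its rpc_cmd wrapper rather than calling rpc.py directly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$rpc" accel_set_options --small-cache-size 0 --large-cache-size 0      # no accel-side buffer caching
  "$rpc" iobuf_set_options --small-pool-count 154 --small_bufsize=8192    # deliberately tiny small pool
  "$rpc" framework_start_init                                             # finish init after --wait-for-rpc
  "$rpc" bdev_malloc_create -b Malloc0 32 512
  "$rpc" nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24              # few shared/per-queue buffers
  "$rpc" nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  "$perf" -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'            # 4 outstanding 128 KiB reads
  retry=$("$rpc" iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [[ $retry -ne 0 ]]                                                      # buffers had to be waited for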
00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.003 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.263 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:13:58.263 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:13:58.263 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.263 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.263 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:58.263 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.263 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:58.263 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:58.263 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.263 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.263 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.263 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:58.264 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:58.264 Cannot find device "nvmf_init_br" 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:58.264 Cannot find device "nvmf_init_br2" 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:58.264 Cannot find device "nvmf_tgt_br" 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:58.264 Cannot find device "nvmf_tgt_br2" 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:58.264 Cannot find device "nvmf_init_br" 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:58.264 Cannot find device "nvmf_init_br2" 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:58.264 Cannot find device "nvmf_tgt_br" 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:58.264 Cannot find device "nvmf_tgt_br2" 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:58.264 Cannot find device "nvmf_br" 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:58.264 Cannot find device "nvmf_init_if" 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:58.264 Cannot find device "nvmf_init_if2" 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:58.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:13:58.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:58.264 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
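The ipts calls that come next (and the iptr call in the wait_for_buf teardown above) follow a small bookkeeping convention: every firewall rule the test inserts is tagged with an 'SPDK_NVMF:' comment, so cleanup can later drop exactly those rules by replaying an iptables-save dump without them. Roughly equivalent helpers, with names matching the trace (the exact common.sh implementation may differ in detail):

  ipts() {  # insert an iptables rule, tagged with the full rule text
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
  }
  iptr() {  # remove only the tagged rules by restoring a filtered ruleset
      iptables-save | grep -v SPDK_NVMF | iptables-restore
  }
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP traffic to the target port
  ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # let traffic forward across the test bridge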
00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:58.524 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:58.525 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:58.525 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:13:58.525 00:13:58.525 --- 10.0.0.3 ping statistics --- 00:13:58.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.525 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:58.525 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:58.525 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms 00:13:58.525 00:13:58.525 --- 10.0.0.4 ping statistics --- 00:13:58.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.525 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:58.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:58.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:13:58.525 00:13:58.525 --- 10.0.0.1 ping statistics --- 00:13:58.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.525 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:58.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:58.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:13:58.525 00:13:58.525 --- 10.0.0.2 ping statistics --- 00:13:58.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.525 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73719 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73719 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73719 ']' 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.525 10:56:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:58.784 [2024-12-09 10:56:51.720033] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:13:58.784 [2024-12-09 10:56:51.720092] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.784 [2024-12-09 10:56:51.870210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.784 [2024-12-09 10:56:51.913931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.784 [2024-12-09 10:56:51.913976] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.784 [2024-12-09 10:56:51.913982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.784 [2024-12-09 10:56:51.913987] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.784 [2024-12-09 10:56:51.913991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.784 [2024-12-09 10:56:51.914255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.784 [2024-12-09 10:56:51.954460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:59.722 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.722 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:13:59.722 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:59.722 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:59.722 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:59.722 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.722 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:59.722 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73746 00:13:59.722 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:13:59.722 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:13:59.722 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=5303d4e9-ec51-47eb-9e62-0198544f3d4d 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=5082cb39-3048-48cd-bc25-7f5906a92c50 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=70756da7-339b-43e2-aa50-4b1e04b92f12 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:59.723 null0 00:13:59.723 null1 00:13:59.723 [2024-12-09 10:56:52.669336] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:13:59.723 [2024-12-09 10:56:52.669393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73746 ] 00:13:59.723 null2 00:13:59.723 [2024-12-09 10:56:52.675571] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.723 [2024-12-09 10:56:52.699648] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73746 /var/tmp/tgt2.sock 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 73746 ']' 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.723 10:56:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:13:59.723 [2024-12-09 10:56:52.805307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.723 [2024-12-09 10:56:52.859253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.982 [2024-12-09 10:56:52.916288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:59.982 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.982 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:13:59.982 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:14:00.241 [2024-12-09 10:56:53.409289] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.500 [2024-12-09 10:56:53.425326] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:14:00.500 nvme0n1 nvme0n2 00:14:00.500 nvme1n1 00:14:00.500 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:14:00.500 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:14:00.500 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:14:00.500 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:14:00.500 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:14:00.500 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:14:00.500 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:14:00.500 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:14:00.500 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:14:00.500 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:14:00.500 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:00.500 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:00.500 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:00.500 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:14:00.500 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:14:00.500 10:56:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:01.878 10:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 5303d4e9-ec51-47eb-9e62-0198544f3d4d 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5303d4e9ec5147eb9e620198544f3d4d 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5303D4E9EC5147EB9E620198544F3D4D 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 5303D4E9EC5147EB9E620198544F3D4D == \5\3\0\3\D\4\E\9\E\C\5\1\4\7\E\B\9\E\6\2\0\1\9\8\5\4\4\F\3\D\4\D ]] 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 5082cb39-3048-48cd-bc25-7f5906a92c50 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5082cb39304848cdbc257f5906a92c50 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5082CB39304848CDBC257F5906A92C50 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 5082CB39304848CDBC257F5906A92C50 == \5\0\8\2\C\B\3\9\3\0\4\8\4\8\C\D\B\C\2\5\7\F\5\9\0\6\A\9\2\C\5\0 ]] 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:01.878 10:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 70756da7-339b-43e2-aa50-4b1e04b92f12 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=70756da7339b43e2aa504b1e04b92f12 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 70756DA7339B43E2AA504B1E04B92F12 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 70756DA7339B43E2AA504B1E04B92F12 == \7\0\7\5\6\D\A\7\3\3\9\B\4\3\E\2\A\A\5\0\4\B\1\E\0\4\B\9\2\F\1\2 ]] 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73746 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73746 ']' 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73746 00:14:01.878 10:56:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:14:01.878 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.878 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73746 00:14:01.878 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:01.878 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:01.878 killing process with pid 73746 00:14:01.878 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73746' 00:14:01.878 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73746 00:14:01.878 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73746 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:02.445 rmmod nvme_tcp 00:14:02.445 rmmod nvme_fabrics 00:14:02.445 rmmod nvme_keyring 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73719 ']' 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73719 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 73719 ']' 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 73719 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73719 00:14:02.445 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:02.446 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:02.446 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73719' 00:14:02.446 killing process with pid 73719 00:14:02.446 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 73719 00:14:02.446 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 73719 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:02.705 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:02.965 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:02.965 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:02.965 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:02.965 10:56:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:02.965 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:02.965 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:02.965 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.965 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.965 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.965 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:14:02.965 00:14:02.965 real 0m5.137s 00:14:02.965 user 0m7.142s 00:14:02.965 sys 0m1.686s 00:14:02.965 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.965 10:56:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:02.965 ************************************ 00:14:02.965 END TEST nvmf_nsid 00:14:02.965 ************************************ 00:14:02.965 10:56:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@72 -- # trap - SIGINT SIGTERM EXIT 00:14:02.965 00:14:02.965 real 4m34.693s 00:14:02.965 user 9m13.765s 00:14:02.965 sys 1m3.746s 00:14:02.965 10:56:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:02.965 10:56:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:02.965 ************************************ 00:14:02.965 END TEST nvmf_target_extra 00:14:02.965 ************************************ 00:14:03.225 10:56:56 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:03.225 10:56:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:03.225 10:56:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.225 10:56:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:03.225 ************************************ 00:14:03.225 START TEST nvmf_host 00:14:03.225 ************************************ 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:03.225 * Looking for test storage... 
00:14:03.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:03.225 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:03.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.485 --rc genhtml_branch_coverage=1 00:14:03.485 --rc genhtml_function_coverage=1 00:14:03.485 --rc genhtml_legend=1 00:14:03.485 --rc geninfo_all_blocks=1 00:14:03.485 --rc geninfo_unexecuted_blocks=1 00:14:03.485 00:14:03.485 ' 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:03.485 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:03.485 --rc genhtml_branch_coverage=1 00:14:03.485 --rc genhtml_function_coverage=1 00:14:03.485 --rc genhtml_legend=1 00:14:03.485 --rc geninfo_all_blocks=1 00:14:03.485 --rc geninfo_unexecuted_blocks=1 00:14:03.485 00:14:03.485 ' 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:03.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.485 --rc genhtml_branch_coverage=1 00:14:03.485 --rc genhtml_function_coverage=1 00:14:03.485 --rc genhtml_legend=1 00:14:03.485 --rc geninfo_all_blocks=1 00:14:03.485 --rc geninfo_unexecuted_blocks=1 00:14:03.485 00:14:03.485 ' 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:03.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.485 --rc genhtml_branch_coverage=1 00:14:03.485 --rc genhtml_function_coverage=1 00:14:03.485 --rc genhtml_legend=1 00:14:03.485 --rc geninfo_all_blocks=1 00:14:03.485 --rc geninfo_unexecuted_blocks=1 00:14:03.485 00:14:03.485 ' 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:03.485 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:03.485 
10:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.485 10:56:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:03.486 ************************************ 00:14:03.486 START TEST nvmf_identify 00:14:03.486 ************************************ 00:14:03.486 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:03.486 * Looking for test storage... 00:14:03.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:03.486 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:03.486 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:03.486 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:03.746 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:03.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.746 --rc genhtml_branch_coverage=1 00:14:03.746 --rc genhtml_function_coverage=1 00:14:03.746 --rc genhtml_legend=1 00:14:03.746 --rc geninfo_all_blocks=1 00:14:03.746 --rc geninfo_unexecuted_blocks=1 00:14:03.746 00:14:03.746 ' 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:03.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.747 --rc genhtml_branch_coverage=1 00:14:03.747 --rc genhtml_function_coverage=1 00:14:03.747 --rc genhtml_legend=1 00:14:03.747 --rc geninfo_all_blocks=1 00:14:03.747 --rc geninfo_unexecuted_blocks=1 00:14:03.747 00:14:03.747 ' 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:03.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.747 --rc genhtml_branch_coverage=1 00:14:03.747 --rc genhtml_function_coverage=1 00:14:03.747 --rc genhtml_legend=1 00:14:03.747 --rc geninfo_all_blocks=1 00:14:03.747 --rc geninfo_unexecuted_blocks=1 00:14:03.747 00:14:03.747 ' 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:03.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.747 --rc genhtml_branch_coverage=1 00:14:03.747 --rc genhtml_function_coverage=1 00:14:03.747 --rc genhtml_legend=1 00:14:03.747 --rc geninfo_all_blocks=1 00:14:03.747 --rc geninfo_unexecuted_blocks=1 00:14:03.747 00:14:03.747 ' 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.747 
10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:03.747 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.747 10:56:56 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:03.747 Cannot find device "nvmf_init_br" 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:03.747 Cannot find device "nvmf_init_br2" 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:03.747 Cannot find device "nvmf_tgt_br" 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:14:03.747 Cannot find device "nvmf_tgt_br2" 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:03.747 Cannot find device "nvmf_init_br" 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:03.747 Cannot find device "nvmf_init_br2" 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:14:03.747 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:03.747 Cannot find device "nvmf_tgt_br" 00:14:03.748 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:14:03.748 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:03.748 Cannot find device "nvmf_tgt_br2" 00:14:03.748 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:14:03.748 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:03.748 Cannot find device "nvmf_br" 00:14:03.748 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:14:03.748 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:03.748 Cannot find device "nvmf_init_if" 00:14:03.748 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:14:03.748 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:03.748 Cannot find device "nvmf_init_if2" 00:14:03.748 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:14:03.748 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:04.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:04.007 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:14:04.007 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:04.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:04.007 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:14:04.007 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:04.007 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:04.007 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:04.008 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:04.008 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:04.008 10:56:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:04.008 
10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:04.008 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:04.008 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.125 ms 00:14:04.008 00:14:04.008 --- 10.0.0.3 ping statistics --- 00:14:04.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.008 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:04.008 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:04.008 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms 00:14:04.008 00:14:04.008 --- 10.0.0.4 ping statistics --- 00:14:04.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.008 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:04.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:14:04.008 00:14:04.008 --- 10.0.0.1 ping statistics --- 00:14:04.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.008 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:04.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:14:04.008 00:14:04.008 --- 10.0.0.2 ping statistics --- 00:14:04.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.008 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:04.008 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:04.268 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.268 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:04.268 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:04.268 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:04.268 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:04.268 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:04.268 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74102 00:14:04.268 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:04.268 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:04.268 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74102 00:14:04.268 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74102 ']' 00:14:04.268 
10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.268 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.268 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.268 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.268 10:56:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:04.268 [2024-12-09 10:56:57.269741] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:14:04.268 [2024-12-09 10:56:57.269826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.268 [2024-12-09 10:56:57.423557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:04.528 [2024-12-09 10:56:57.473176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.528 [2024-12-09 10:56:57.473224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.528 [2024-12-09 10:56:57.473231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.528 [2024-12-09 10:56:57.473236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.528 [2024-12-09 10:56:57.473239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
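The trace above is nvmf/common.sh rebuilding the test network from scratch: the failed "Cannot find device" deletions at the top are its idempotent cleanup pass, after which it creates the nvmf_tgt_ns_spdk namespace, four veth pairs, a bridge joining the host-side peers, per-interface addresses, and iptables accepts for NVMe/TCP on port 4420; identify.sh then launches nvmf_tgt inside the namespace. A condensed bash sketch of the same bring-up, using only commands and values that appear in the trace (run as root; a minimal sketch, not the actual helper functions):

  #!/usr/bin/env bash
  # Target namespace and veth pairs: initiator-side interfaces stay in the
  # root namespace, target-side interfaces are moved into nvmf_tgt_ns_spdk.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addressing: 10.0.0.1/.2 on the initiator side, 10.0.0.3/.4 inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring everything up and bridge the host-side veth peers together.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # Admit NVMe/TCP (port 4420) on the initiator interfaces and allow
  # bridge-local forwarding, as in the ipts/iptables calls above.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Connectivity checks in both directions across nvmf_br.
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

  # identify.sh@18/@19 then starts the target inside the namespace with the
  # flags traced above, and waits for /var/tmp/spdk.sock before configuring it.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The four ping checks correspond to the ICMP exchanges recorded above and confirm that both initiator addresses can reach both namespace addresses before the target process is configured.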
00:14:04.528 [2024-12-09 10:56:57.474376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.528 [2024-12-09 10:56:57.474433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:04.528 [2024-12-09 10:56:57.474538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.528 [2024-12-09 10:56:57.474543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:04.528 [2024-12-09 10:56:57.516980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:05.097 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:05.097 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:14:05.097 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:05.097 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.097 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:05.098 [2024-12-09 10:56:58.111897] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:05.098 Malloc0 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:05.098 [2024-12-09 10:56:58.227445] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:05.098 [ 00:14:05.098 { 00:14:05.098 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:05.098 "subtype": "Discovery", 00:14:05.098 "listen_addresses": [ 00:14:05.098 { 00:14:05.098 "trtype": "TCP", 00:14:05.098 "adrfam": "IPv4", 00:14:05.098 "traddr": "10.0.0.3", 00:14:05.098 "trsvcid": "4420" 00:14:05.098 } 00:14:05.098 ], 00:14:05.098 "allow_any_host": true, 00:14:05.098 "hosts": [] 00:14:05.098 }, 00:14:05.098 { 00:14:05.098 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.098 "subtype": "NVMe", 00:14:05.098 "listen_addresses": [ 00:14:05.098 { 00:14:05.098 "trtype": "TCP", 00:14:05.098 "adrfam": "IPv4", 00:14:05.098 "traddr": "10.0.0.3", 00:14:05.098 "trsvcid": "4420" 00:14:05.098 } 00:14:05.098 ], 00:14:05.098 "allow_any_host": true, 00:14:05.098 "hosts": [], 00:14:05.098 "serial_number": "SPDK00000000000001", 00:14:05.098 "model_number": "SPDK bdev Controller", 00:14:05.098 "max_namespaces": 32, 00:14:05.098 "min_cntlid": 1, 00:14:05.098 "max_cntlid": 65519, 00:14:05.098 "namespaces": [ 00:14:05.098 { 00:14:05.098 "nsid": 1, 00:14:05.098 "bdev_name": "Malloc0", 00:14:05.098 "name": "Malloc0", 00:14:05.098 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:05.098 "eui64": "ABCDEF0123456789", 00:14:05.098 "uuid": "eca1137d-3c11-486d-a19b-7e38b150cd65" 00:14:05.098 } 00:14:05.098 ] 00:14:05.098 } 00:14:05.098 ] 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.098 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:05.360 [2024-12-09 10:56:58.293737] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
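The nvmf_get_subsystems JSON above reflects the configuration that the preceding rpc_cmd calls pushed into the running target: a TCP transport, a 64 MB / 512-byte-block malloc bdev exposed as namespace 1 of nqn.2016-06.io.spdk:cnode1, and TCP listeners on 10.0.0.3:4420 for both that subsystem and the discovery service, after which identify.sh@39 points spdk_nvme_identify at the discovery NQN. A condensed sketch of the same sequence, assuming rpc_cmd forwards its arguments to SPDK's scripts/rpc.py on the default /var/tmp/spdk.sock socket (paths as used in this job):

  #!/usr/bin/env bash
  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py"   # assumed stand-in for the rpc_cmd test helper

  # Transport, backing bdev, subsystem, namespace, and listeners
  # (arguments verbatim from the rpc_cmd trace above).
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc0
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  "$RPC" nvmf_get_subsystems

  # Query the discovery controller the same way identify.sh@39 does.
  "$SPDK/build/bin/spdk_nvme_identify" \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all

Cleanup is left to the trap registered at identify.sh@21, which runs process_shm and nvmftestfini when the script exits.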
00:14:05.360 [2024-12-09 10:56:58.293792] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74133 ] 00:14:05.360 [2024-12-09 10:56:58.444814] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:14:05.360 [2024-12-09 10:56:58.444862] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:05.361 [2024-12-09 10:56:58.444867] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:05.361 [2024-12-09 10:56:58.444880] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:05.361 [2024-12-09 10:56:58.444889] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:05.361 [2024-12-09 10:56:58.445144] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:14:05.361 [2024-12-09 10:56:58.445180] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e95750 0 00:14:05.361 [2024-12-09 10:56:58.451786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:05.361 [2024-12-09 10:56:58.451801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:05.361 [2024-12-09 10:56:58.451805] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:05.361 [2024-12-09 10:56:58.451807] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:05.361 [2024-12-09 10:56:58.451839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.451843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.451846] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e95750) 00:14:05.361 [2024-12-09 10:56:58.451857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:05.361 [2024-12-09 10:56:58.451879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9740, cid 0, qid 0 00:14:05.361 [2024-12-09 10:56:58.459757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.361 [2024-12-09 10:56:58.459770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.361 [2024-12-09 10:56:58.459773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.459777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9740) on tqpair=0x1e95750 00:14:05.361 [2024-12-09 10:56:58.459784] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:05.361 [2024-12-09 10:56:58.459790] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:14:05.361 [2024-12-09 10:56:58.459794] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:14:05.361 [2024-12-09 10:56:58.459808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.459811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:14:05.361 [2024-12-09 10:56:58.459813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e95750) 00:14:05.361 [2024-12-09 10:56:58.459820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.361 [2024-12-09 10:56:58.459843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9740, cid 0, qid 0 00:14:05.361 [2024-12-09 10:56:58.459923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.361 [2024-12-09 10:56:58.459928] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.361 [2024-12-09 10:56:58.459931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.459934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9740) on tqpair=0x1e95750 00:14:05.361 [2024-12-09 10:56:58.459938] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:14:05.361 [2024-12-09 10:56:58.459943] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:14:05.361 [2024-12-09 10:56:58.459949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.459952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.459954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e95750) 00:14:05.361 [2024-12-09 10:56:58.459960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.361 [2024-12-09 10:56:58.459972] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9740, cid 0, qid 0 00:14:05.361 [2024-12-09 10:56:58.460024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.361 [2024-12-09 10:56:58.460029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.361 [2024-12-09 10:56:58.460032] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.460035] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9740) on tqpair=0x1e95750 00:14:05.361 [2024-12-09 10:56:58.460039] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:14:05.361 [2024-12-09 10:56:58.460045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:05.361 [2024-12-09 10:56:58.460050] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.460053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.460056] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e95750) 00:14:05.361 [2024-12-09 10:56:58.460061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.361 [2024-12-09 10:56:58.460072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9740, cid 0, qid 0 00:14:05.361 [2024-12-09 10:56:58.460114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.361 [2024-12-09 10:56:58.460119] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.361 [2024-12-09 10:56:58.460121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.460124] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9740) on tqpair=0x1e95750 00:14:05.361 [2024-12-09 10:56:58.460129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:05.361 [2024-12-09 10:56:58.460136] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.460139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.460141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e95750) 00:14:05.361 [2024-12-09 10:56:58.460147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.361 [2024-12-09 10:56:58.460158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9740, cid 0, qid 0 00:14:05.361 [2024-12-09 10:56:58.460197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.361 [2024-12-09 10:56:58.460202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.361 [2024-12-09 10:56:58.460205] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.460207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9740) on tqpair=0x1e95750 00:14:05.361 [2024-12-09 10:56:58.460211] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:05.361 [2024-12-09 10:56:58.460215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:05.361 [2024-12-09 10:56:58.460221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:05.361 [2024-12-09 10:56:58.460326] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:14:05.361 [2024-12-09 10:56:58.460329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:05.361 [2024-12-09 10:56:58.460337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.460340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.460343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e95750) 00:14:05.361 [2024-12-09 10:56:58.460348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.361 [2024-12-09 10:56:58.460360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9740, cid 0, qid 0 00:14:05.361 [2024-12-09 10:56:58.460403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.361 [2024-12-09 10:56:58.460408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.361 [2024-12-09 10:56:58.460410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:14:05.361 [2024-12-09 10:56:58.460413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9740) on tqpair=0x1e95750 00:14:05.361 [2024-12-09 10:56:58.460416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:05.361 [2024-12-09 10:56:58.460423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.460426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.460429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e95750) 00:14:05.361 [2024-12-09 10:56:58.460434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.361 [2024-12-09 10:56:58.460445] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9740, cid 0, qid 0 00:14:05.361 [2024-12-09 10:56:58.460494] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.361 [2024-12-09 10:56:58.460500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.361 [2024-12-09 10:56:58.460502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.460505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9740) on tqpair=0x1e95750 00:14:05.361 [2024-12-09 10:56:58.460508] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:05.361 [2024-12-09 10:56:58.460512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:05.361 [2024-12-09 10:56:58.460517] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:14:05.361 [2024-12-09 10:56:58.460525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:05.361 [2024-12-09 10:56:58.460533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.460536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e95750) 00:14:05.361 [2024-12-09 10:56:58.460541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.361 [2024-12-09 10:56:58.460552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9740, cid 0, qid 0 00:14:05.361 [2024-12-09 10:56:58.460638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.361 [2024-12-09 10:56:58.460643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.361 [2024-12-09 10:56:58.460646] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.361 [2024-12-09 10:56:58.460649] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e95750): datao=0, datal=4096, cccid=0 00:14:05.361 [2024-12-09 10:56:58.460653] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ef9740) on tqpair(0x1e95750): expected_datao=0, payload_size=4096 00:14:05.362 [2024-12-09 10:56:58.460656] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.460663] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.460666] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.460672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.362 [2024-12-09 10:56:58.460677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.362 [2024-12-09 10:56:58.460680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.460682] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9740) on tqpair=0x1e95750 00:14:05.362 [2024-12-09 10:56:58.460700] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:14:05.362 [2024-12-09 10:56:58.460703] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:14:05.362 [2024-12-09 10:56:58.460706] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:14:05.362 [2024-12-09 10:56:58.460710] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:14:05.362 [2024-12-09 10:56:58.460715] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:14:05.362 [2024-12-09 10:56:58.460719] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:14:05.362 [2024-12-09 10:56:58.460724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:05.362 [2024-12-09 10:56:58.460729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.460732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.460734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e95750) 00:14:05.362 [2024-12-09 10:56:58.460739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:05.362 [2024-12-09 10:56:58.460749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9740, cid 0, qid 0 00:14:05.362 [2024-12-09 10:56:58.460807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.362 [2024-12-09 10:56:58.460812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.362 [2024-12-09 10:56:58.460814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.460816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9740) on tqpair=0x1e95750 00:14:05.362 [2024-12-09 10:56:58.460824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.460827] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.460829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e95750) 00:14:05.362 [2024-12-09 10:56:58.460834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.362 
[2024-12-09 10:56:58.460838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.460840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.460842] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e95750) 00:14:05.362 [2024-12-09 10:56:58.460846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.362 [2024-12-09 10:56:58.460850] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.460853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.460855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e95750) 00:14:05.362 [2024-12-09 10:56:58.460859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.362 [2024-12-09 10:56:58.460863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.460865] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.460868] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.362 [2024-12-09 10:56:58.460872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.362 [2024-12-09 10:56:58.460875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:05.362 [2024-12-09 10:56:58.460880] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:05.362 [2024-12-09 10:56:58.460884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.460886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e95750) 00:14:05.362 [2024-12-09 10:56:58.460891] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.362 [2024-12-09 10:56:58.460903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9740, cid 0, qid 0 00:14:05.362 [2024-12-09 10:56:58.460907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef98c0, cid 1, qid 0 00:14:05.362 [2024-12-09 10:56:58.460910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9a40, cid 2, qid 0 00:14:05.362 [2024-12-09 10:56:58.460914] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.362 [2024-12-09 10:56:58.460917] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9d40, cid 4, qid 0 00:14:05.362 [2024-12-09 10:56:58.461004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.362 [2024-12-09 10:56:58.461008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.362 [2024-12-09 10:56:58.461010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9d40) on tqpair=0x1e95750 00:14:05.362 [2024-12-09 
10:56:58.461021] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:14:05.362 [2024-12-09 10:56:58.461025] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:14:05.362 [2024-12-09 10:56:58.461032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e95750) 00:14:05.362 [2024-12-09 10:56:58.461039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.362 [2024-12-09 10:56:58.461049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9d40, cid 4, qid 0 00:14:05.362 [2024-12-09 10:56:58.461100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.362 [2024-12-09 10:56:58.461104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.362 [2024-12-09 10:56:58.461106] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461109] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e95750): datao=0, datal=4096, cccid=4 00:14:05.362 [2024-12-09 10:56:58.461111] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ef9d40) on tqpair(0x1e95750): expected_datao=0, payload_size=4096 00:14:05.362 [2024-12-09 10:56:58.461114] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461119] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461121] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.362 [2024-12-09 10:56:58.461131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.362 [2024-12-09 10:56:58.461133] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9d40) on tqpair=0x1e95750 00:14:05.362 [2024-12-09 10:56:58.461144] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:14:05.362 [2024-12-09 10:56:58.461164] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461167] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e95750) 00:14:05.362 [2024-12-09 10:56:58.461171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.362 [2024-12-09 10:56:58.461176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e95750) 00:14:05.362 [2024-12-09 10:56:58.461185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.362 [2024-12-09 10:56:58.461198] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9d40, cid 4, qid 0 00:14:05.362 [2024-12-09 10:56:58.461203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9ec0, cid 5, qid 0 00:14:05.362 [2024-12-09 10:56:58.461292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.362 [2024-12-09 10:56:58.461297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.362 [2024-12-09 10:56:58.461299] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461301] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e95750): datao=0, datal=1024, cccid=4 00:14:05.362 [2024-12-09 10:56:58.461304] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ef9d40) on tqpair(0x1e95750): expected_datao=0, payload_size=1024 00:14:05.362 [2024-12-09 10:56:58.461307] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461311] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461313] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.362 [2024-12-09 10:56:58.461321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.362 [2024-12-09 10:56:58.461323] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9ec0) on tqpair=0x1e95750 00:14:05.362 [2024-12-09 10:56:58.461336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.362 [2024-12-09 10:56:58.461341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.362 [2024-12-09 10:56:58.461343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9d40) on tqpair=0x1e95750 00:14:05.362 [2024-12-09 10:56:58.461353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.362 [2024-12-09 10:56:58.461356] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e95750) 00:14:05.363 [2024-12-09 10:56:58.461360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.363 [2024-12-09 10:56:58.461372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9d40, cid 4, qid 0 00:14:05.363 [2024-12-09 10:56:58.461428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.363 [2024-12-09 10:56:58.461433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.363 [2024-12-09 10:56:58.461435] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.363 [2024-12-09 10:56:58.461437] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e95750): datao=0, datal=3072, cccid=4 00:14:05.363 [2024-12-09 10:56:58.461440] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ef9d40) on tqpair(0x1e95750): expected_datao=0, payload_size=3072 00:14:05.363 [2024-12-09 10:56:58.461442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.363 [2024-12-09 10:56:58.461447] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:14:05.363 [2024-12-09 10:56:58.461449] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.363 [2024-12-09 10:56:58.461455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.363 [2024-12-09 10:56:58.461459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.363 [2024-12-09 10:56:58.461461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.363 [2024-12-09 10:56:58.461463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9d40) on tqpair=0x1e95750 00:14:05.363 [2024-12-09 10:56:58.461469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.363 [2024-12-09 10:56:58.461472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e95750) 00:14:05.363 [2024-12-09 10:56:58.461476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.363 [2024-12-09 10:56:58.461488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9d40, cid 4, qid 0 00:14:05.363 [2024-12-09 10:56:58.461543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.363 [2024-12-09 10:56:58.461547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.363 [2024-12-09 10:56:58.461549] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.363 [2024-12-09 10:56:58.461552] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e95750): datao=0, datal=8, cccid=4 00:14:05.363 [2024-12-09 10:56:58.461555] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ef9d40) on tqpair(0x1e95750): expected_datao=0, payload_size=8 00:14:05.363 [2024-12-09 10:56:58.461557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.363 [2024-12-09 10:56:58.461562] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.363 [2024-12-09 10:56:58.461564] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.363 [2024-12-09 10:56:58.461573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.363 [2024-12-09 10:56:58.461577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.363 [2024-12-09 10:56:58.461580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.363 [2024-12-09 10:56:58.461582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9d40) on tqpair=0x1e95750 00:14:05.363 ===================================================== 00:14:05.363 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:05.363 ===================================================== 00:14:05.363 Controller Capabilities/Features 00:14:05.363 ================================ 00:14:05.363 Vendor ID: 0000 00:14:05.363 Subsystem Vendor ID: 0000 00:14:05.363 Serial Number: .................... 00:14:05.363 Model Number: ........................................ 
00:14:05.363 Firmware Version: 25.01 00:14:05.363 Recommended Arb Burst: 0 00:14:05.363 IEEE OUI Identifier: 00 00 00 00:14:05.363 Multi-path I/O 00:14:05.363 May have multiple subsystem ports: No 00:14:05.363 May have multiple controllers: No 00:14:05.363 Associated with SR-IOV VF: No 00:14:05.363 Max Data Transfer Size: 131072 00:14:05.363 Max Number of Namespaces: 0 00:14:05.363 Max Number of I/O Queues: 1024 00:14:05.363 NVMe Specification Version (VS): 1.3 00:14:05.363 NVMe Specification Version (Identify): 1.3 00:14:05.363 Maximum Queue Entries: 128 00:14:05.363 Contiguous Queues Required: Yes 00:14:05.363 Arbitration Mechanisms Supported 00:14:05.363 Weighted Round Robin: Not Supported 00:14:05.363 Vendor Specific: Not Supported 00:14:05.363 Reset Timeout: 15000 ms 00:14:05.363 Doorbell Stride: 4 bytes 00:14:05.363 NVM Subsystem Reset: Not Supported 00:14:05.363 Command Sets Supported 00:14:05.363 NVM Command Set: Supported 00:14:05.363 Boot Partition: Not Supported 00:14:05.363 Memory Page Size Minimum: 4096 bytes 00:14:05.363 Memory Page Size Maximum: 4096 bytes 00:14:05.363 Persistent Memory Region: Not Supported 00:14:05.363 Optional Asynchronous Events Supported 00:14:05.363 Namespace Attribute Notices: Not Supported 00:14:05.363 Firmware Activation Notices: Not Supported 00:14:05.363 ANA Change Notices: Not Supported 00:14:05.363 PLE Aggregate Log Change Notices: Not Supported 00:14:05.363 LBA Status Info Alert Notices: Not Supported 00:14:05.363 EGE Aggregate Log Change Notices: Not Supported 00:14:05.363 Normal NVM Subsystem Shutdown event: Not Supported 00:14:05.363 Zone Descriptor Change Notices: Not Supported 00:14:05.363 Discovery Log Change Notices: Supported 00:14:05.363 Controller Attributes 00:14:05.363 128-bit Host Identifier: Not Supported 00:14:05.363 Non-Operational Permissive Mode: Not Supported 00:14:05.363 NVM Sets: Not Supported 00:14:05.363 Read Recovery Levels: Not Supported 00:14:05.363 Endurance Groups: Not Supported 00:14:05.363 Predictable Latency Mode: Not Supported 00:14:05.363 Traffic Based Keep ALive: Not Supported 00:14:05.363 Namespace Granularity: Not Supported 00:14:05.363 SQ Associations: Not Supported 00:14:05.363 UUID List: Not Supported 00:14:05.363 Multi-Domain Subsystem: Not Supported 00:14:05.363 Fixed Capacity Management: Not Supported 00:14:05.363 Variable Capacity Management: Not Supported 00:14:05.363 Delete Endurance Group: Not Supported 00:14:05.363 Delete NVM Set: Not Supported 00:14:05.363 Extended LBA Formats Supported: Not Supported 00:14:05.363 Flexible Data Placement Supported: Not Supported 00:14:05.363 00:14:05.363 Controller Memory Buffer Support 00:14:05.363 ================================ 00:14:05.363 Supported: No 00:14:05.363 00:14:05.363 Persistent Memory Region Support 00:14:05.363 ================================ 00:14:05.363 Supported: No 00:14:05.363 00:14:05.363 Admin Command Set Attributes 00:14:05.363 ============================ 00:14:05.363 Security Send/Receive: Not Supported 00:14:05.363 Format NVM: Not Supported 00:14:05.363 Firmware Activate/Download: Not Supported 00:14:05.363 Namespace Management: Not Supported 00:14:05.363 Device Self-Test: Not Supported 00:14:05.363 Directives: Not Supported 00:14:05.363 NVMe-MI: Not Supported 00:14:05.363 Virtualization Management: Not Supported 00:14:05.363 Doorbell Buffer Config: Not Supported 00:14:05.363 Get LBA Status Capability: Not Supported 00:14:05.363 Command & Feature Lockdown Capability: Not Supported 00:14:05.363 Abort Command Limit: 1 00:14:05.363 Async 
Event Request Limit: 4 00:14:05.363 Number of Firmware Slots: N/A 00:14:05.363 Firmware Slot 1 Read-Only: N/A 00:14:05.363 Firmware Activation Without Reset: N/A 00:14:05.363 Multiple Update Detection Support: N/A 00:14:05.363 Firmware Update Granularity: No Information Provided 00:14:05.363 Per-Namespace SMART Log: No 00:14:05.363 Asymmetric Namespace Access Log Page: Not Supported 00:14:05.363 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:05.363 Command Effects Log Page: Not Supported 00:14:05.363 Get Log Page Extended Data: Supported 00:14:05.363 Telemetry Log Pages: Not Supported 00:14:05.363 Persistent Event Log Pages: Not Supported 00:14:05.363 Supported Log Pages Log Page: May Support 00:14:05.363 Commands Supported & Effects Log Page: Not Supported 00:14:05.363 Feature Identifiers & Effects Log Page:May Support 00:14:05.363 NVMe-MI Commands & Effects Log Page: May Support 00:14:05.363 Data Area 4 for Telemetry Log: Not Supported 00:14:05.363 Error Log Page Entries Supported: 128 00:14:05.363 Keep Alive: Not Supported 00:14:05.363 00:14:05.363 NVM Command Set Attributes 00:14:05.363 ========================== 00:14:05.363 Submission Queue Entry Size 00:14:05.363 Max: 1 00:14:05.363 Min: 1 00:14:05.363 Completion Queue Entry Size 00:14:05.363 Max: 1 00:14:05.363 Min: 1 00:14:05.363 Number of Namespaces: 0 00:14:05.363 Compare Command: Not Supported 00:14:05.363 Write Uncorrectable Command: Not Supported 00:14:05.363 Dataset Management Command: Not Supported 00:14:05.363 Write Zeroes Command: Not Supported 00:14:05.363 Set Features Save Field: Not Supported 00:14:05.363 Reservations: Not Supported 00:14:05.363 Timestamp: Not Supported 00:14:05.363 Copy: Not Supported 00:14:05.363 Volatile Write Cache: Not Present 00:14:05.363 Atomic Write Unit (Normal): 1 00:14:05.363 Atomic Write Unit (PFail): 1 00:14:05.363 Atomic Compare & Write Unit: 1 00:14:05.363 Fused Compare & Write: Supported 00:14:05.363 Scatter-Gather List 00:14:05.363 SGL Command Set: Supported 00:14:05.363 SGL Keyed: Supported 00:14:05.363 SGL Bit Bucket Descriptor: Not Supported 00:14:05.363 SGL Metadata Pointer: Not Supported 00:14:05.363 Oversized SGL: Not Supported 00:14:05.363 SGL Metadata Address: Not Supported 00:14:05.363 SGL Offset: Supported 00:14:05.363 Transport SGL Data Block: Not Supported 00:14:05.363 Replay Protected Memory Block: Not Supported 00:14:05.363 00:14:05.363 Firmware Slot Information 00:14:05.363 ========================= 00:14:05.363 Active slot: 0 00:14:05.363 00:14:05.363 00:14:05.363 Error Log 00:14:05.363 ========= 00:14:05.363 00:14:05.363 Active Namespaces 00:14:05.363 ================= 00:14:05.364 Discovery Log Page 00:14:05.364 ================== 00:14:05.364 Generation Counter: 2 00:14:05.364 Number of Records: 2 00:14:05.364 Record Format: 0 00:14:05.364 00:14:05.364 Discovery Log Entry 0 00:14:05.364 ---------------------- 00:14:05.364 Transport Type: 3 (TCP) 00:14:05.364 Address Family: 1 (IPv4) 00:14:05.364 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:05.364 Entry Flags: 00:14:05.364 Duplicate Returned Information: 1 00:14:05.364 Explicit Persistent Connection Support for Discovery: 1 00:14:05.364 Transport Requirements: 00:14:05.364 Secure Channel: Not Required 00:14:05.364 Port ID: 0 (0x0000) 00:14:05.364 Controller ID: 65535 (0xffff) 00:14:05.364 Admin Max SQ Size: 128 00:14:05.364 Transport Service Identifier: 4420 00:14:05.364 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:05.364 Transport Address: 10.0.0.3 00:14:05.364 
Discovery Log Entry 1 00:14:05.364 ---------------------- 00:14:05.364 Transport Type: 3 (TCP) 00:14:05.364 Address Family: 1 (IPv4) 00:14:05.364 Subsystem Type: 2 (NVM Subsystem) 00:14:05.364 Entry Flags: 00:14:05.364 Duplicate Returned Information: 0 00:14:05.364 Explicit Persistent Connection Support for Discovery: 0 00:14:05.364 Transport Requirements: 00:14:05.364 Secure Channel: Not Required 00:14:05.364 Port ID: 0 (0x0000) 00:14:05.364 Controller ID: 65535 (0xffff) 00:14:05.364 Admin Max SQ Size: 128 00:14:05.364 Transport Service Identifier: 4420 00:14:05.364 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:05.364 Transport Address: 10.0.0.3 [2024-12-09 10:56:58.461649] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:14:05.364 [2024-12-09 10:56:58.461657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9740) on tqpair=0x1e95750 00:14:05.364 [2024-12-09 10:56:58.461661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.364 [2024-12-09 10:56:58.461665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef98c0) on tqpair=0x1e95750 00:14:05.364 [2024-12-09 10:56:58.461668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.364 [2024-12-09 10:56:58.461671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9a40) on tqpair=0x1e95750 00:14:05.364 [2024-12-09 10:56:58.461674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.364 [2024-12-09 10:56:58.461677] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.364 [2024-12-09 10:56:58.461680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.364 [2024-12-09 10:56:58.461686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.461689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.461692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.364 [2024-12-09 10:56:58.461697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.364 [2024-12-09 10:56:58.461708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.364 [2024-12-09 10:56:58.461765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.364 [2024-12-09 10:56:58.461769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.364 [2024-12-09 10:56:58.461772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.461774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.364 [2024-12-09 10:56:58.461782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.461785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.461787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.364 [2024-12-09 
10:56:58.461792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.364 [2024-12-09 10:56:58.461805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.364 [2024-12-09 10:56:58.461859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.364 [2024-12-09 10:56:58.461864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.364 [2024-12-09 10:56:58.461866] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.461868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.364 [2024-12-09 10:56:58.461872] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:14:05.364 [2024-12-09 10:56:58.461875] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:14:05.364 [2024-12-09 10:56:58.461881] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.461883] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.461886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.364 [2024-12-09 10:56:58.461890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.364 [2024-12-09 10:56:58.461900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.364 [2024-12-09 10:56:58.461944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.364 [2024-12-09 10:56:58.461949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.364 [2024-12-09 10:56:58.461951] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.461953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.364 [2024-12-09 10:56:58.461960] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.461963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.461965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.364 [2024-12-09 10:56:58.461969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.364 [2024-12-09 10:56:58.461979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.364 [2024-12-09 10:56:58.462016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.364 [2024-12-09 10:56:58.462020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.364 [2024-12-09 10:56:58.462022] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.462025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.364 [2024-12-09 10:56:58.462031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.462034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.462036] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.364 [2024-12-09 10:56:58.462041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.364 [2024-12-09 10:56:58.462050] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.364 [2024-12-09 10:56:58.462087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.364 [2024-12-09 10:56:58.462092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.364 [2024-12-09 10:56:58.462095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.462097] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.364 [2024-12-09 10:56:58.462104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.462106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.462108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.364 [2024-12-09 10:56:58.462113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.364 [2024-12-09 10:56:58.462122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.364 [2024-12-09 10:56:58.462168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.364 [2024-12-09 10:56:58.462173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.364 [2024-12-09 10:56:58.462175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.462177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.364 [2024-12-09 10:56:58.462184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.462186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.462188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.364 [2024-12-09 10:56:58.462193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.364 [2024-12-09 10:56:58.462202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.364 [2024-12-09 10:56:58.462241] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.364 [2024-12-09 10:56:58.462246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.364 [2024-12-09 10:56:58.462248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.462250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.364 [2024-12-09 10:56:58.462257] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.462259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.462261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.364 [2024-12-09 10:56:58.462266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.364 [2024-12-09 10:56:58.462275] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.364 [2024-12-09 10:56:58.462311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.364 [2024-12-09 10:56:58.462316] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.364 [2024-12-09 10:56:58.462318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.364 [2024-12-09 10:56:58.462320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.364 [2024-12-09 10:56:58.462327] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.365 [2024-12-09 10:56:58.462336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.365 [2024-12-09 10:56:58.462346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.365 [2024-12-09 10:56:58.462389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.365 [2024-12-09 10:56:58.462393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.365 [2024-12-09 10:56:58.462396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462398] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.365 [2024-12-09 10:56:58.462405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462407] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.365 [2024-12-09 10:56:58.462414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.365 [2024-12-09 10:56:58.462424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.365 [2024-12-09 10:56:58.462465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.365 [2024-12-09 10:56:58.462469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.365 [2024-12-09 10:56:58.462471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.365 [2024-12-09 10:56:58.462480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.365 [2024-12-09 10:56:58.462489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.365 [2024-12-09 10:56:58.462498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.365 
[2024-12-09 10:56:58.462538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.365 [2024-12-09 10:56:58.462542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.365 [2024-12-09 10:56:58.462544] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.365 [2024-12-09 10:56:58.462553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.365 [2024-12-09 10:56:58.462562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.365 [2024-12-09 10:56:58.462572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.365 [2024-12-09 10:56:58.462615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.365 [2024-12-09 10:56:58.462619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.365 [2024-12-09 10:56:58.462621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.365 [2024-12-09 10:56:58.462630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.365 [2024-12-09 10:56:58.462640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.365 [2024-12-09 10:56:58.462649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.365 [2024-12-09 10:56:58.462689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.365 [2024-12-09 10:56:58.462693] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.365 [2024-12-09 10:56:58.462695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.365 [2024-12-09 10:56:58.462705] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462709] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.365 [2024-12-09 10:56:58.462714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.365 [2024-12-09 10:56:58.462723] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.365 [2024-12-09 10:56:58.462775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.365 [2024-12-09 10:56:58.462779] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:14:05.365 [2024-12-09 10:56:58.462782] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.365 [2024-12-09 10:56:58.462791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.365 [2024-12-09 10:56:58.462801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.365 [2024-12-09 10:56:58.462811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.365 [2024-12-09 10:56:58.462848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.365 [2024-12-09 10:56:58.462853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.365 [2024-12-09 10:56:58.462855] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.365 [2024-12-09 10:56:58.462864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462867] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.365 [2024-12-09 10:56:58.462873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.365 [2024-12-09 10:56:58.462883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.365 [2024-12-09 10:56:58.462926] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.365 [2024-12-09 10:56:58.462930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.365 [2024-12-09 10:56:58.462932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.365 [2024-12-09 10:56:58.462941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.462946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.365 [2024-12-09 10:56:58.462950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.365 [2024-12-09 10:56:58.462960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.365 [2024-12-09 10:56:58.462999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.365 [2024-12-09 10:56:58.463003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.365 [2024-12-09 10:56:58.463005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.463008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.365 [2024-12-09 10:56:58.463014] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.463017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.463019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.365 [2024-12-09 10:56:58.463024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.365 [2024-12-09 10:56:58.463033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.365 [2024-12-09 10:56:58.463072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.365 [2024-12-09 10:56:58.463077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.365 [2024-12-09 10:56:58.463079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.463081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.365 [2024-12-09 10:56:58.463088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.463091] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.365 [2024-12-09 10:56:58.463093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.366 [2024-12-09 10:56:58.463098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.366 [2024-12-09 10:56:58.463107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.366 [2024-12-09 10:56:58.463146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.366 [2024-12-09 10:56:58.463150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.366 [2024-12-09 10:56:58.463153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.366 [2024-12-09 10:56:58.463161] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463164] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.366 [2024-12-09 10:56:58.463171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.366 [2024-12-09 10:56:58.463180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.366 [2024-12-09 10:56:58.463224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.366 [2024-12-09 10:56:58.463228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.366 [2024-12-09 10:56:58.463230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.366 [2024-12-09 10:56:58.463239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463242] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.366 [2024-12-09 10:56:58.463248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.366 [2024-12-09 10:56:58.463258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.366 [2024-12-09 10:56:58.463297] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.366 [2024-12-09 10:56:58.463301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.366 [2024-12-09 10:56:58.463303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.366 [2024-12-09 10:56:58.463312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.366 [2024-12-09 10:56:58.463322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.366 [2024-12-09 10:56:58.463331] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.366 [2024-12-09 10:56:58.463373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.366 [2024-12-09 10:56:58.463377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.366 [2024-12-09 10:56:58.463380] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.366 [2024-12-09 10:56:58.463389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.366 [2024-12-09 10:56:58.463398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.366 [2024-12-09 10:56:58.463408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.366 [2024-12-09 10:56:58.463447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.366 [2024-12-09 10:56:58.463452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.366 [2024-12-09 10:56:58.463454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.366 [2024-12-09 10:56:58.463462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463465] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463467] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.366 
[2024-12-09 10:56:58.463472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.366 [2024-12-09 10:56:58.463481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.366 [2024-12-09 10:56:58.463521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.366 [2024-12-09 10:56:58.463525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.366 [2024-12-09 10:56:58.463527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.366 [2024-12-09 10:56:58.463536] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463539] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.366 [2024-12-09 10:56:58.463546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.366 [2024-12-09 10:56:58.463555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.366 [2024-12-09 10:56:58.463601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.366 [2024-12-09 10:56:58.463605] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.366 [2024-12-09 10:56:58.463607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.366 [2024-12-09 10:56:58.463616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463621] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.366 [2024-12-09 10:56:58.463625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.366 [2024-12-09 10:56:58.463635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.366 [2024-12-09 10:56:58.463671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.366 [2024-12-09 10:56:58.463675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.366 [2024-12-09 10:56:58.463677] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.366 [2024-12-09 10:56:58.463686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.463691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.366 [2024-12-09 10:56:58.463696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.366 [2024-12-09 10:56:58.463705] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.366 [2024-12-09 10:56:58.467752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.366 [2024-12-09 10:56:58.467775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.366 [2024-12-09 10:56:58.467778] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.467780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.366 [2024-12-09 10:56:58.467789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.467791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.467794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e95750) 00:14:05.366 [2024-12-09 10:56:58.467799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.366 [2024-12-09 10:56:58.467814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ef9bc0, cid 3, qid 0 00:14:05.366 [2024-12-09 10:56:58.467865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.366 [2024-12-09 10:56:58.467870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.366 [2024-12-09 10:56:58.467872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.366 [2024-12-09 10:56:58.467875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ef9bc0) on tqpair=0x1e95750 00:14:05.366 [2024-12-09 10:56:58.467880] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:14:05.629 00:14:05.629 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:05.629 [2024-12-09 10:56:58.589392] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
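The identify test above passes the target as a single transport-ID string via -r; a minimal sketch of invoking the same binary by hand, with the transport string copied from the log and assuming the SPDK example apps have been built under build/bin of the checked-out repository:

# Identify the data subsystem directly over NVMe/TCP, with all debug log flags enabled
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -L all
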
00:14:05.629 [2024-12-09 10:56:58.589429] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74140 ] 00:14:05.629 [2024-12-09 10:56:58.734743] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:14:05.629 [2024-12-09 10:56:58.734808] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:05.629 [2024-12-09 10:56:58.734813] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:05.629 [2024-12-09 10:56:58.734825] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:05.629 [2024-12-09 10:56:58.734833] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:05.629 [2024-12-09 10:56:58.735030] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:14:05.629 [2024-12-09 10:56:58.735060] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1abe750 0 00:14:05.629 [2024-12-09 10:56:58.740780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:05.629 [2024-12-09 10:56:58.740791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:05.629 [2024-12-09 10:56:58.740794] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:05.629 [2024-12-09 10:56:58.740797] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:05.629 [2024-12-09 10:56:58.740820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.629 [2024-12-09 10:56:58.740824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.629 [2024-12-09 10:56:58.740826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1abe750) 00:14:05.629 [2024-12-09 10:56:58.740835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:05.629 [2024-12-09 10:56:58.740856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22740, cid 0, qid 0 00:14:05.629 [2024-12-09 10:56:58.748772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.629 [2024-12-09 10:56:58.748785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.629 [2024-12-09 10:56:58.748788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.629 [2024-12-09 10:56:58.748791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22740) on tqpair=0x1abe750 00:14:05.629 [2024-12-09 10:56:58.748797] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:05.629 [2024-12-09 10:56:58.748802] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:14:05.629 [2024-12-09 10:56:58.748806] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:14:05.629 [2024-12-09 10:56:58.748818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.629 [2024-12-09 10:56:58.748821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.629 [2024-12-09 10:56:58.748823] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1abe750) 00:14:05.629 [2024-12-09 10:56:58.748829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.629 [2024-12-09 10:56:58.748846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22740, cid 0, qid 0 00:14:05.629 [2024-12-09 10:56:58.748885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.629 [2024-12-09 10:56:58.748890] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.629 [2024-12-09 10:56:58.748892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.629 [2024-12-09 10:56:58.748895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22740) on tqpair=0x1abe750 00:14:05.629 [2024-12-09 10:56:58.748899] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:14:05.629 [2024-12-09 10:56:58.748904] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:14:05.629 [2024-12-09 10:56:58.748909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.629 [2024-12-09 10:56:58.748911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.629 [2024-12-09 10:56:58.748914] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1abe750) 00:14:05.629 [2024-12-09 10:56:58.748919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.629 [2024-12-09 10:56:58.748929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22740, cid 0, qid 0 00:14:05.629 [2024-12-09 10:56:58.748965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.629 [2024-12-09 10:56:58.748969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.629 [2024-12-09 10:56:58.748971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.629 [2024-12-09 10:56:58.748974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22740) on tqpair=0x1abe750 00:14:05.629 [2024-12-09 10:56:58.748978] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:14:05.629 [2024-12-09 10:56:58.748983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:05.629 [2024-12-09 10:56:58.748988] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.629 [2024-12-09 10:56:58.748991] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.629 [2024-12-09 10:56:58.748993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1abe750) 00:14:05.630 [2024-12-09 10:56:58.748998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.630 [2024-12-09 10:56:58.749008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22740, cid 0, qid 0 00:14:05.630 [2024-12-09 10:56:58.749048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.630 [2024-12-09 10:56:58.749053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.630 
[2024-12-09 10:56:58.749055] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22740) on tqpair=0x1abe750 00:14:05.630 [2024-12-09 10:56:58.749061] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:05.630 [2024-12-09 10:56:58.749067] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749070] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1abe750) 00:14:05.630 [2024-12-09 10:56:58.749077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.630 [2024-12-09 10:56:58.749087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22740, cid 0, qid 0 00:14:05.630 [2024-12-09 10:56:58.749119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.630 [2024-12-09 10:56:58.749123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.630 [2024-12-09 10:56:58.749125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22740) on tqpair=0x1abe750 00:14:05.630 [2024-12-09 10:56:58.749131] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:05.630 [2024-12-09 10:56:58.749134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:05.630 [2024-12-09 10:56:58.749139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:05.630 [2024-12-09 10:56:58.749243] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:14:05.630 [2024-12-09 10:56:58.749246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:05.630 [2024-12-09 10:56:58.749252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1abe750) 00:14:05.630 [2024-12-09 10:56:58.749262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.630 [2024-12-09 10:56:58.749272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22740, cid 0, qid 0 00:14:05.630 [2024-12-09 10:56:58.749318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.630 [2024-12-09 10:56:58.749323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.630 [2024-12-09 10:56:58.749325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22740) on tqpair=0x1abe750 
00:14:05.630 [2024-12-09 10:56:58.749330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:05.630 [2024-12-09 10:56:58.749337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1abe750) 00:14:05.630 [2024-12-09 10:56:58.749347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.630 [2024-12-09 10:56:58.749357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22740, cid 0, qid 0 00:14:05.630 [2024-12-09 10:56:58.749391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.630 [2024-12-09 10:56:58.749395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.630 [2024-12-09 10:56:58.749397] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22740) on tqpair=0x1abe750 00:14:05.630 [2024-12-09 10:56:58.749403] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:05.630 [2024-12-09 10:56:58.749406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:05.630 [2024-12-09 10:56:58.749411] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:14:05.630 [2024-12-09 10:56:58.749417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:05.630 [2024-12-09 10:56:58.749424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1abe750) 00:14:05.630 [2024-12-09 10:56:58.749431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.630 [2024-12-09 10:56:58.749441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22740, cid 0, qid 0 00:14:05.630 [2024-12-09 10:56:58.749519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.630 [2024-12-09 10:56:58.749524] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.630 [2024-12-09 10:56:58.749526] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749529] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1abe750): datao=0, datal=4096, cccid=0 00:14:05.630 [2024-12-09 10:56:58.749532] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b22740) on tqpair(0x1abe750): expected_datao=0, payload_size=4096 00:14:05.630 [2024-12-09 10:56:58.749535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749540] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749543] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749549] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.630 [2024-12-09 10:56:58.749553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.630 [2024-12-09 10:56:58.749555] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22740) on tqpair=0x1abe750 00:14:05.630 [2024-12-09 10:56:58.749564] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:14:05.630 [2024-12-09 10:56:58.749566] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:14:05.630 [2024-12-09 10:56:58.749569] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:14:05.630 [2024-12-09 10:56:58.749572] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:14:05.630 [2024-12-09 10:56:58.749578] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:14:05.630 [2024-12-09 10:56:58.749581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:14:05.630 [2024-12-09 10:56:58.749586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:05.630 [2024-12-09 10:56:58.749591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749593] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1abe750) 00:14:05.630 [2024-12-09 10:56:58.749601] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:05.630 [2024-12-09 10:56:58.749611] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22740, cid 0, qid 0 00:14:05.630 [2024-12-09 10:56:58.749654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.630 [2024-12-09 10:56:58.749658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.630 [2024-12-09 10:56:58.749660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22740) on tqpair=0x1abe750 00:14:05.630 [2024-12-09 10:56:58.749670] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749673] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1abe750) 00:14:05.630 [2024-12-09 10:56:58.749679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.630 [2024-12-09 10:56:58.749683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749686] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.630 [2024-12-09 
10:56:58.749688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1abe750) 00:14:05.630 [2024-12-09 10:56:58.749692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.630 [2024-12-09 10:56:58.749695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749698] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1abe750) 00:14:05.630 [2024-12-09 10:56:58.749704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.630 [2024-12-09 10:56:58.749708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.630 [2024-12-09 10:56:58.749716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.630 [2024-12-09 10:56:58.749719] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:05.630 [2024-12-09 10:56:58.749724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:05.630 [2024-12-09 10:56:58.749729] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.630 [2024-12-09 10:56:58.749731] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1abe750) 00:14:05.630 [2024-12-09 10:56:58.749736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.630 [2024-12-09 10:56:58.749759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22740, cid 0, qid 0 00:14:05.631 [2024-12-09 10:56:58.749763] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b228c0, cid 1, qid 0 00:14:05.631 [2024-12-09 10:56:58.749767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22a40, cid 2, qid 0 00:14:05.631 [2024-12-09 10:56:58.749770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.631 [2024-12-09 10:56:58.749773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22d40, cid 4, qid 0 00:14:05.631 [2024-12-09 10:56:58.749850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.631 [2024-12-09 10:56:58.749855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.631 [2024-12-09 10:56:58.749857] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.749859] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22d40) on tqpair=0x1abe750 00:14:05.631 [2024-12-09 10:56:58.749866] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:14:05.631 [2024-12-09 10:56:58.749869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:05.631 [2024-12-09 10:56:58.749875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:14:05.631 [2024-12-09 10:56:58.749879] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:05.631 [2024-12-09 10:56:58.749884] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.749886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.749888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1abe750) 00:14:05.631 [2024-12-09 10:56:58.749893] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:05.631 [2024-12-09 10:56:58.749904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22d40, cid 4, qid 0 00:14:05.631 [2024-12-09 10:56:58.749948] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.631 [2024-12-09 10:56:58.749952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.631 [2024-12-09 10:56:58.749954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.749957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22d40) on tqpair=0x1abe750 00:14:05.631 [2024-12-09 10:56:58.750007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:14:05.631 [2024-12-09 10:56:58.750014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:05.631 [2024-12-09 10:56:58.750020] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1abe750) 00:14:05.631 [2024-12-09 10:56:58.750027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.631 [2024-12-09 10:56:58.750037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22d40, cid 4, qid 0 00:14:05.631 [2024-12-09 10:56:58.750081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.631 [2024-12-09 10:56:58.750086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.631 [2024-12-09 10:56:58.750088] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750090] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1abe750): datao=0, datal=4096, cccid=4 00:14:05.631 [2024-12-09 10:56:58.750093] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b22d40) on tqpair(0x1abe750): expected_datao=0, payload_size=4096 00:14:05.631 [2024-12-09 10:56:58.750095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750101] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750103] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.631 [2024-12-09 
10:56:58.750109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.631 [2024-12-09 10:56:58.750113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.631 [2024-12-09 10:56:58.750115] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22d40) on tqpair=0x1abe750 00:14:05.631 [2024-12-09 10:56:58.750128] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:14:05.631 [2024-12-09 10:56:58.750134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:14:05.631 [2024-12-09 10:56:58.750141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:14:05.631 [2024-12-09 10:56:58.750146] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1abe750) 00:14:05.631 [2024-12-09 10:56:58.750153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.631 [2024-12-09 10:56:58.750163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22d40, cid 4, qid 0 00:14:05.631 [2024-12-09 10:56:58.750248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.631 [2024-12-09 10:56:58.750252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.631 [2024-12-09 10:56:58.750254] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750257] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1abe750): datao=0, datal=4096, cccid=4 00:14:05.631 [2024-12-09 10:56:58.750259] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b22d40) on tqpair(0x1abe750): expected_datao=0, payload_size=4096 00:14:05.631 [2024-12-09 10:56:58.750262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750267] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750269] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.631 [2024-12-09 10:56:58.750279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.631 [2024-12-09 10:56:58.750281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22d40) on tqpair=0x1abe750 00:14:05.631 [2024-12-09 10:56:58.750295] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:05.631 [2024-12-09 10:56:58.750301] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:05.631 [2024-12-09 10:56:58.750306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1abe750) 00:14:05.631 [2024-12-09 10:56:58.750313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.631 [2024-12-09 10:56:58.750323] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22d40, cid 4, qid 0 00:14:05.631 [2024-12-09 10:56:58.750379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.631 [2024-12-09 10:56:58.750383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.631 [2024-12-09 10:56:58.750385] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750388] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1abe750): datao=0, datal=4096, cccid=4 00:14:05.631 [2024-12-09 10:56:58.750391] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b22d40) on tqpair(0x1abe750): expected_datao=0, payload_size=4096 00:14:05.631 [2024-12-09 10:56:58.750394] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750398] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750401] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.631 [2024-12-09 10:56:58.750410] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.631 [2024-12-09 10:56:58.750412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22d40) on tqpair=0x1abe750 00:14:05.631 [2024-12-09 10:56:58.750420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:05.631 [2024-12-09 10:56:58.750425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:14:05.631 [2024-12-09 10:56:58.750432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:14:05.631 [2024-12-09 10:56:58.750437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:05.631 [2024-12-09 10:56:58.750440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:05.631 [2024-12-09 10:56:58.750443] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:14:05.631 [2024-12-09 10:56:58.750447] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:14:05.631 [2024-12-09 10:56:58.750450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:14:05.631 [2024-12-09 10:56:58.750453] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:14:05.631 [2024-12-09 10:56:58.750464] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.631 
[2024-12-09 10:56:58.750467] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1abe750) 00:14:05.631 [2024-12-09 10:56:58.750471] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.631 [2024-12-09 10:56:58.750476] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1abe750) 00:14:05.631 [2024-12-09 10:56:58.750484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:05.631 [2024-12-09 10:56:58.750498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22d40, cid 4, qid 0 00:14:05.631 [2024-12-09 10:56:58.750502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22ec0, cid 5, qid 0 00:14:05.631 [2024-12-09 10:56:58.750561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.631 [2024-12-09 10:56:58.750566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.631 [2024-12-09 10:56:58.750568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.631 [2024-12-09 10:56:58.750570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22d40) on tqpair=0x1abe750 00:14:05.631 [2024-12-09 10:56:58.750575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.631 [2024-12-09 10:56:58.750579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.632 [2024-12-09 10:56:58.750581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.750584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22ec0) on tqpair=0x1abe750 00:14:05.632 [2024-12-09 10:56:58.750590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.750593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1abe750) 00:14:05.632 [2024-12-09 10:56:58.750597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.632 [2024-12-09 10:56:58.750607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22ec0, cid 5, qid 0 00:14:05.632 [2024-12-09 10:56:58.750641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.632 [2024-12-09 10:56:58.750646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.632 [2024-12-09 10:56:58.750648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.750650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22ec0) on tqpair=0x1abe750 00:14:05.632 [2024-12-09 10:56:58.750656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.750659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1abe750) 00:14:05.632 [2024-12-09 10:56:58.750663] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.632 [2024-12-09 10:56:58.750673] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22ec0, cid 5, qid 0 00:14:05.632 [2024-12-09 10:56:58.750712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.632 [2024-12-09 10:56:58.750717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.632 [2024-12-09 10:56:58.750719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.750721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22ec0) on tqpair=0x1abe750 00:14:05.632 [2024-12-09 10:56:58.750727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.750730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1abe750) 00:14:05.632 [2024-12-09 10:56:58.750734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.632 [2024-12-09 10:56:58.750744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22ec0, cid 5, qid 0 00:14:05.632 [2024-12-09 10:56:58.750790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.632 [2024-12-09 10:56:58.750795] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.632 [2024-12-09 10:56:58.750798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.750800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22ec0) on tqpair=0x1abe750 00:14:05.632 [2024-12-09 10:56:58.750811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.750814] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1abe750) 00:14:05.632 [2024-12-09 10:56:58.750819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.632 [2024-12-09 10:56:58.750824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.750826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1abe750) 00:14:05.632 [2024-12-09 10:56:58.750830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.632 [2024-12-09 10:56:58.750835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.750838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1abe750) 00:14:05.632 [2024-12-09 10:56:58.750842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.632 [2024-12-09 10:56:58.750850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.750852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1abe750) 00:14:05.632 [2024-12-09 10:56:58.750856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.632 [2024-12-09 10:56:58.750869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22ec0, cid 5, qid 0 00:14:05.632 
[2024-12-09 10:56:58.750873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22d40, cid 4, qid 0 00:14:05.632 [2024-12-09 10:56:58.750876] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b23040, cid 6, qid 0 00:14:05.632 [2024-12-09 10:56:58.750879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b231c0, cid 7, qid 0 00:14:05.632 [2024-12-09 10:56:58.750996] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.632 [2024-12-09 10:56:58.751001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.632 [2024-12-09 10:56:58.751003] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751005] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1abe750): datao=0, datal=8192, cccid=5 00:14:05.632 [2024-12-09 10:56:58.751008] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b22ec0) on tqpair(0x1abe750): expected_datao=0, payload_size=8192 00:14:05.632 [2024-12-09 10:56:58.751010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751022] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751025] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.632 [2024-12-09 10:56:58.751032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.632 [2024-12-09 10:56:58.751035] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751037] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1abe750): datao=0, datal=512, cccid=4 00:14:05.632 [2024-12-09 10:56:58.751039] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b22d40) on tqpair(0x1abe750): expected_datao=0, payload_size=512 00:14:05.632 [2024-12-09 10:56:58.751042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751046] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751048] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751052] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.632 [2024-12-09 10:56:58.751056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.632 [2024-12-09 10:56:58.751059] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751061] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1abe750): datao=0, datal=512, cccid=6 00:14:05.632 [2024-12-09 10:56:58.751064] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b23040) on tqpair(0x1abe750): expected_datao=0, payload_size=512 00:14:05.632 [2024-12-09 10:56:58.751066] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751070] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751073] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:05.632 [2024-12-09 10:56:58.751080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:05.632 [2024-12-09 10:56:58.751082] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751085] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1abe750): datao=0, datal=4096, cccid=7 00:14:05.632 [2024-12-09 10:56:58.751087] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b231c0) on tqpair(0x1abe750): expected_datao=0, payload_size=4096 00:14:05.632 [2024-12-09 10:56:58.751090] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751095] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751097] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.632 [2024-12-09 10:56:58.751106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.632 [2024-12-09 10:56:58.751108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22ec0) on tqpair=0x1abe750 00:14:05.632 [2024-12-09 10:56:58.751121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.632 [2024-12-09 10:56:58.751125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.632 [2024-12-09 10:56:58.751127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22d40) on tqpair=0x1abe750 00:14:05.632 [2024-12-09 10:56:58.751138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.632 [2024-12-09 10:56:58.751143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.632 [2024-12-09 10:56:58.751145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b23040) on tqpair=0x1abe750 00:14:05.632 [2024-12-09 10:56:58.751152] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.632 [2024-12-09 10:56:58.751156] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.632 [2024-12-09 10:56:58.751158] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.632 [2024-12-09 10:56:58.751161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b231c0) on tqpair=0x1abe750 00:14:05.632 ===================================================== 00:14:05.632 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:05.632 ===================================================== 00:14:05.632 Controller Capabilities/Features 00:14:05.632 ================================ 00:14:05.632 Vendor ID: 8086 00:14:05.632 Subsystem Vendor ID: 8086 00:14:05.632 Serial Number: SPDK00000000000001 00:14:05.632 Model Number: SPDK bdev Controller 00:14:05.632 Firmware Version: 25.01 00:14:05.632 Recommended Arb Burst: 6 00:14:05.632 IEEE OUI Identifier: e4 d2 5c 00:14:05.632 Multi-path I/O 00:14:05.632 May have multiple subsystem ports: Yes 00:14:05.632 May have multiple controllers: Yes 00:14:05.632 Associated with SR-IOV VF: No 00:14:05.632 Max Data Transfer Size: 131072 00:14:05.632 Max Number of Namespaces: 32 00:14:05.632 Max Number of I/O Queues: 127 00:14:05.632 NVMe Specification Version (VS): 1.3 00:14:05.632 NVMe Specification Version (Identify): 1.3 
00:14:05.632 Maximum Queue Entries: 128 00:14:05.632 Contiguous Queues Required: Yes 00:14:05.632 Arbitration Mechanisms Supported 00:14:05.632 Weighted Round Robin: Not Supported 00:14:05.632 Vendor Specific: Not Supported 00:14:05.632 Reset Timeout: 15000 ms 00:14:05.632 Doorbell Stride: 4 bytes 00:14:05.632 NVM Subsystem Reset: Not Supported 00:14:05.632 Command Sets Supported 00:14:05.632 NVM Command Set: Supported 00:14:05.633 Boot Partition: Not Supported 00:14:05.633 Memory Page Size Minimum: 4096 bytes 00:14:05.633 Memory Page Size Maximum: 4096 bytes 00:14:05.633 Persistent Memory Region: Not Supported 00:14:05.633 Optional Asynchronous Events Supported 00:14:05.633 Namespace Attribute Notices: Supported 00:14:05.633 Firmware Activation Notices: Not Supported 00:14:05.633 ANA Change Notices: Not Supported 00:14:05.633 PLE Aggregate Log Change Notices: Not Supported 00:14:05.633 LBA Status Info Alert Notices: Not Supported 00:14:05.633 EGE Aggregate Log Change Notices: Not Supported 00:14:05.633 Normal NVM Subsystem Shutdown event: Not Supported 00:14:05.633 Zone Descriptor Change Notices: Not Supported 00:14:05.633 Discovery Log Change Notices: Not Supported 00:14:05.633 Controller Attributes 00:14:05.633 128-bit Host Identifier: Supported 00:14:05.633 Non-Operational Permissive Mode: Not Supported 00:14:05.633 NVM Sets: Not Supported 00:14:05.633 Read Recovery Levels: Not Supported 00:14:05.633 Endurance Groups: Not Supported 00:14:05.633 Predictable Latency Mode: Not Supported 00:14:05.633 Traffic Based Keep ALive: Not Supported 00:14:05.633 Namespace Granularity: Not Supported 00:14:05.633 SQ Associations: Not Supported 00:14:05.633 UUID List: Not Supported 00:14:05.633 Multi-Domain Subsystem: Not Supported 00:14:05.633 Fixed Capacity Management: Not Supported 00:14:05.633 Variable Capacity Management: Not Supported 00:14:05.633 Delete Endurance Group: Not Supported 00:14:05.633 Delete NVM Set: Not Supported 00:14:05.633 Extended LBA Formats Supported: Not Supported 00:14:05.633 Flexible Data Placement Supported: Not Supported 00:14:05.633 00:14:05.633 Controller Memory Buffer Support 00:14:05.633 ================================ 00:14:05.633 Supported: No 00:14:05.633 00:14:05.633 Persistent Memory Region Support 00:14:05.633 ================================ 00:14:05.633 Supported: No 00:14:05.633 00:14:05.633 Admin Command Set Attributes 00:14:05.633 ============================ 00:14:05.633 Security Send/Receive: Not Supported 00:14:05.633 Format NVM: Not Supported 00:14:05.633 Firmware Activate/Download: Not Supported 00:14:05.633 Namespace Management: Not Supported 00:14:05.633 Device Self-Test: Not Supported 00:14:05.633 Directives: Not Supported 00:14:05.633 NVMe-MI: Not Supported 00:14:05.633 Virtualization Management: Not Supported 00:14:05.633 Doorbell Buffer Config: Not Supported 00:14:05.633 Get LBA Status Capability: Not Supported 00:14:05.633 Command & Feature Lockdown Capability: Not Supported 00:14:05.633 Abort Command Limit: 4 00:14:05.633 Async Event Request Limit: 4 00:14:05.633 Number of Firmware Slots: N/A 00:14:05.633 Firmware Slot 1 Read-Only: N/A 00:14:05.633 Firmware Activation Without Reset: N/A 00:14:05.633 Multiple Update Detection Support: N/A 00:14:05.633 Firmware Update Granularity: No Information Provided 00:14:05.633 Per-Namespace SMART Log: No 00:14:05.633 Asymmetric Namespace Access Log Page: Not Supported 00:14:05.633 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:05.633 Command Effects Log Page: Supported 00:14:05.633 Get Log Page Extended 
Data: Supported 00:14:05.633 Telemetry Log Pages: Not Supported 00:14:05.633 Persistent Event Log Pages: Not Supported 00:14:05.633 Supported Log Pages Log Page: May Support 00:14:05.633 Commands Supported & Effects Log Page: Not Supported 00:14:05.633 Feature Identifiers & Effects Log Page:May Support 00:14:05.633 NVMe-MI Commands & Effects Log Page: May Support 00:14:05.633 Data Area 4 for Telemetry Log: Not Supported 00:14:05.633 Error Log Page Entries Supported: 128 00:14:05.633 Keep Alive: Supported 00:14:05.633 Keep Alive Granularity: 10000 ms 00:14:05.633 00:14:05.633 NVM Command Set Attributes 00:14:05.633 ========================== 00:14:05.633 Submission Queue Entry Size 00:14:05.633 Max: 64 00:14:05.633 Min: 64 00:14:05.633 Completion Queue Entry Size 00:14:05.633 Max: 16 00:14:05.633 Min: 16 00:14:05.633 Number of Namespaces: 32 00:14:05.633 Compare Command: Supported 00:14:05.633 Write Uncorrectable Command: Not Supported 00:14:05.633 Dataset Management Command: Supported 00:14:05.633 Write Zeroes Command: Supported 00:14:05.633 Set Features Save Field: Not Supported 00:14:05.633 Reservations: Supported 00:14:05.633 Timestamp: Not Supported 00:14:05.633 Copy: Supported 00:14:05.633 Volatile Write Cache: Present 00:14:05.633 Atomic Write Unit (Normal): 1 00:14:05.633 Atomic Write Unit (PFail): 1 00:14:05.633 Atomic Compare & Write Unit: 1 00:14:05.633 Fused Compare & Write: Supported 00:14:05.633 Scatter-Gather List 00:14:05.633 SGL Command Set: Supported 00:14:05.633 SGL Keyed: Supported 00:14:05.633 SGL Bit Bucket Descriptor: Not Supported 00:14:05.633 SGL Metadata Pointer: Not Supported 00:14:05.633 Oversized SGL: Not Supported 00:14:05.633 SGL Metadata Address: Not Supported 00:14:05.633 SGL Offset: Supported 00:14:05.633 Transport SGL Data Block: Not Supported 00:14:05.633 Replay Protected Memory Block: Not Supported 00:14:05.633 00:14:05.633 Firmware Slot Information 00:14:05.633 ========================= 00:14:05.633 Active slot: 1 00:14:05.633 Slot 1 Firmware Revision: 25.01 00:14:05.633 00:14:05.633 00:14:05.633 Commands Supported and Effects 00:14:05.633 ============================== 00:14:05.633 Admin Commands 00:14:05.633 -------------- 00:14:05.633 Get Log Page (02h): Supported 00:14:05.633 Identify (06h): Supported 00:14:05.633 Abort (08h): Supported 00:14:05.633 Set Features (09h): Supported 00:14:05.633 Get Features (0Ah): Supported 00:14:05.633 Asynchronous Event Request (0Ch): Supported 00:14:05.633 Keep Alive (18h): Supported 00:14:05.633 I/O Commands 00:14:05.633 ------------ 00:14:05.633 Flush (00h): Supported LBA-Change 00:14:05.633 Write (01h): Supported LBA-Change 00:14:05.633 Read (02h): Supported 00:14:05.633 Compare (05h): Supported 00:14:05.633 Write Zeroes (08h): Supported LBA-Change 00:14:05.633 Dataset Management (09h): Supported LBA-Change 00:14:05.633 Copy (19h): Supported LBA-Change 00:14:05.633 00:14:05.633 Error Log 00:14:05.633 ========= 00:14:05.633 00:14:05.633 Arbitration 00:14:05.633 =========== 00:14:05.633 Arbitration Burst: 1 00:14:05.633 00:14:05.633 Power Management 00:14:05.633 ================ 00:14:05.633 Number of Power States: 1 00:14:05.633 Current Power State: Power State #0 00:14:05.633 Power State #0: 00:14:05.633 Max Power: 0.00 W 00:14:05.633 Non-Operational State: Operational 00:14:05.633 Entry Latency: Not Reported 00:14:05.633 Exit Latency: Not Reported 00:14:05.633 Relative Read Throughput: 0 00:14:05.633 Relative Read Latency: 0 00:14:05.633 Relative Write Throughput: 0 00:14:05.633 Relative Write Latency: 0 
00:14:05.633 Idle Power: Not Reported 00:14:05.633 Active Power: Not Reported 00:14:05.633 Non-Operational Permissive Mode: Not Supported 00:14:05.633 00:14:05.633 Health Information 00:14:05.633 ================== 00:14:05.633 Critical Warnings: 00:14:05.633 Available Spare Space: OK 00:14:05.633 Temperature: OK 00:14:05.633 Device Reliability: OK 00:14:05.633 Read Only: No 00:14:05.633 Volatile Memory Backup: OK 00:14:05.633 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:05.633 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:05.633 Available Spare: 0% 00:14:05.633 Available Spare Threshold: 0% 00:14:05.633 Life Percentage Used:[2024-12-09 10:56:58.751244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.633 [2024-12-09 10:56:58.751247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1abe750) 00:14:05.633 [2024-12-09 10:56:58.751252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.633 [2024-12-09 10:56:58.751264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b231c0, cid 7, qid 0 00:14:05.633 [2024-12-09 10:56:58.751296] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.633 [2024-12-09 10:56:58.751301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.633 [2024-12-09 10:56:58.751303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.633 [2024-12-09 10:56:58.751305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b231c0) on tqpair=0x1abe750 00:14:05.633 [2024-12-09 10:56:58.751354] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:14:05.633 [2024-12-09 10:56:58.751363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22740) on tqpair=0x1abe750 00:14:05.634 [2024-12-09 10:56:58.751369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.634 [2024-12-09 10:56:58.751373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b228c0) on tqpair=0x1abe750 00:14:05.634 [2024-12-09 10:56:58.751376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.634 [2024-12-09 10:56:58.751379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22a40) on tqpair=0x1abe750 00:14:05.634 [2024-12-09 10:56:58.751382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.634 [2024-12-09 10:56:58.751385] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.634 [2024-12-09 10:56:58.751388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:05.634 [2024-12-09 10:56:58.751395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.634 [2024-12-09 10:56:58.751404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:14:05.634 [2024-12-09 10:56:58.751419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.634 [2024-12-09 10:56:58.751464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.634 [2024-12-09 10:56:58.751468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.634 [2024-12-09 10:56:58.751470] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.634 [2024-12-09 10:56:58.751478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751480] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.634 [2024-12-09 10:56:58.751487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.634 [2024-12-09 10:56:58.751499] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.634 [2024-12-09 10:56:58.751560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.634 [2024-12-09 10:56:58.751564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.634 [2024-12-09 10:56:58.751566] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.634 [2024-12-09 10:56:58.751572] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:14:05.634 [2024-12-09 10:56:58.751575] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:14:05.634 [2024-12-09 10:56:58.751581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751584] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751586] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.634 [2024-12-09 10:56:58.751591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.634 [2024-12-09 10:56:58.751601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.634 [2024-12-09 10:56:58.751635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.634 [2024-12-09 10:56:58.751639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.634 [2024-12-09 10:56:58.751642] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751644] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.634 [2024-12-09 10:56:58.751651] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.634 [2024-12-09 10:56:58.751661] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.634 [2024-12-09 10:56:58.751670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.634 [2024-12-09 10:56:58.751701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.634 [2024-12-09 10:56:58.751706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.634 [2024-12-09 10:56:58.751708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.634 [2024-12-09 10:56:58.751717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751722] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.634 [2024-12-09 10:56:58.751726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.634 [2024-12-09 10:56:58.751736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.634 [2024-12-09 10:56:58.751787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.634 [2024-12-09 10:56:58.751792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.634 [2024-12-09 10:56:58.751795] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751797] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.634 [2024-12-09 10:56:58.751804] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.634 [2024-12-09 10:56:58.751813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.634 [2024-12-09 10:56:58.751824] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.634 [2024-12-09 10:56:58.751876] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.634 [2024-12-09 10:56:58.751880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.634 [2024-12-09 10:56:58.751883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.634 [2024-12-09 10:56:58.751891] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.634 [2024-12-09 10:56:58.751901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.634 [2024-12-09 10:56:58.751911] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.634 [2024-12-09 10:56:58.751946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.634 [2024-12-09 10:56:58.751951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.634 [2024-12-09 10:56:58.751953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.634 [2024-12-09 10:56:58.751962] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751965] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.751967] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.634 [2024-12-09 10:56:58.751972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.634 [2024-12-09 10:56:58.751981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.634 [2024-12-09 10:56:58.752019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.634 [2024-12-09 10:56:58.752023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.634 [2024-12-09 10:56:58.752025] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.752028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.634 [2024-12-09 10:56:58.752034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.752037] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.752039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.634 [2024-12-09 10:56:58.752044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.634 [2024-12-09 10:56:58.752053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.634 [2024-12-09 10:56:58.752093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.634 [2024-12-09 10:56:58.752098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.634 [2024-12-09 10:56:58.752100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.752102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.634 [2024-12-09 10:56:58.752109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.752111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.634 [2024-12-09 10:56:58.752114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.634 [2024-12-09 10:56:58.752118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.634 [2024-12-09 10:56:58.752128] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.634 [2024-12-09 10:56:58.752168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.634 [2024-12-09 
10:56:58.752172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.634 [2024-12-09 10:56:58.752174] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.635 [2024-12-09 10:56:58.752183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.635 [2024-12-09 10:56:58.752193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.635 [2024-12-09 10:56:58.752202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.635 [2024-12-09 10:56:58.752237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.635 [2024-12-09 10:56:58.752242] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.635 [2024-12-09 10:56:58.752244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.635 [2024-12-09 10:56:58.752253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.635 [2024-12-09 10:56:58.752262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.635 [2024-12-09 10:56:58.752272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.635 [2024-12-09 10:56:58.752309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.635 [2024-12-09 10:56:58.752314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.635 [2024-12-09 10:56:58.752316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.635 [2024-12-09 10:56:58.752325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752327] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.635 [2024-12-09 10:56:58.752334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.635 [2024-12-09 10:56:58.752344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.635 [2024-12-09 10:56:58.752391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.635 [2024-12-09 10:56:58.752396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.635 [2024-12-09 10:56:58.752398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.635 
[2024-12-09 10:56:58.752400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.635 [2024-12-09 10:56:58.752406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.635 [2024-12-09 10:56:58.752416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.635 [2024-12-09 10:56:58.752427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.635 [2024-12-09 10:56:58.752462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.635 [2024-12-09 10:56:58.752467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.635 [2024-12-09 10:56:58.752469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.635 [2024-12-09 10:56:58.752478] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.635 [2024-12-09 10:56:58.752488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.635 [2024-12-09 10:56:58.752498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.635 [2024-12-09 10:56:58.752529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.635 [2024-12-09 10:56:58.752534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.635 [2024-12-09 10:56:58.752536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.635 [2024-12-09 10:56:58.752545] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.635 [2024-12-09 10:56:58.752554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.635 [2024-12-09 10:56:58.752564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.635 [2024-12-09 10:56:58.752595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.635 [2024-12-09 10:56:58.752599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.635 [2024-12-09 10:56:58.752602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752604] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.635 [2024-12-09 10:56:58.752610] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.635 [2024-12-09 10:56:58.752620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.635 [2024-12-09 10:56:58.752630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.635 [2024-12-09 10:56:58.752662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.635 [2024-12-09 10:56:58.752666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.635 [2024-12-09 10:56:58.752668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.635 [2024-12-09 10:56:58.752677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.752682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.635 [2024-12-09 10:56:58.752687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.635 [2024-12-09 10:56:58.752697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.635 [2024-12-09 10:56:58.752736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.635 [2024-12-09 10:56:58.752740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.635 [2024-12-09 10:56:58.752743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.756761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.635 [2024-12-09 10:56:58.756771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.756774] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.756776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1abe750) 00:14:05.635 [2024-12-09 10:56:58.756781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:05.635 [2024-12-09 10:56:58.756796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b22bc0, cid 3, qid 0 00:14:05.635 [2024-12-09 10:56:58.756833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:05.635 [2024-12-09 10:56:58.756838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:05.635 [2024-12-09 10:56:58.756841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:05.635 [2024-12-09 10:56:58.756843] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b22bc0) on tqpair=0x1abe750 00:14:05.635 [2024-12-09 10:56:58.756848] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:14:05.895 0% 00:14:05.895 Data Units Read: 0 00:14:05.895 Data Units Written: 0 
00:14:05.895 Host Read Commands: 0 00:14:05.895 Host Write Commands: 0 00:14:05.895 Controller Busy Time: 0 minutes 00:14:05.895 Power Cycles: 0 00:14:05.895 Power On Hours: 0 hours 00:14:05.895 Unsafe Shutdowns: 0 00:14:05.895 Unrecoverable Media Errors: 0 00:14:05.895 Lifetime Error Log Entries: 0 00:14:05.895 Warning Temperature Time: 0 minutes 00:14:05.895 Critical Temperature Time: 0 minutes 00:14:05.895 00:14:05.895 Number of Queues 00:14:05.895 ================ 00:14:05.895 Number of I/O Submission Queues: 127 00:14:05.895 Number of I/O Completion Queues: 127 00:14:05.895 00:14:05.895 Active Namespaces 00:14:05.895 ================= 00:14:05.895 Namespace ID:1 00:14:05.895 Error Recovery Timeout: Unlimited 00:14:05.895 Command Set Identifier: NVM (00h) 00:14:05.895 Deallocate: Supported 00:14:05.895 Deallocated/Unwritten Error: Not Supported 00:14:05.895 Deallocated Read Value: Unknown 00:14:05.895 Deallocate in Write Zeroes: Not Supported 00:14:05.895 Deallocated Guard Field: 0xFFFF 00:14:05.895 Flush: Supported 00:14:05.895 Reservation: Supported 00:14:05.895 Namespace Sharing Capabilities: Multiple Controllers 00:14:05.895 Size (in LBAs): 131072 (0GiB) 00:14:05.895 Capacity (in LBAs): 131072 (0GiB) 00:14:05.895 Utilization (in LBAs): 131072 (0GiB) 00:14:05.895 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:05.895 EUI64: ABCDEF0123456789 00:14:05.895 UUID: eca1137d-3c11-486d-a19b-7e38b150cd65 00:14:05.895 Thin Provisioning: Not Supported 00:14:05.895 Per-NS Atomic Units: Yes 00:14:05.895 Atomic Boundary Size (Normal): 0 00:14:05.895 Atomic Boundary Size (PFail): 0 00:14:05.895 Atomic Boundary Offset: 0 00:14:05.895 Maximum Single Source Range Length: 65535 00:14:05.895 Maximum Copy Length: 65535 00:14:05.895 Maximum Source Range Count: 1 00:14:05.895 NGUID/EUI64 Never Reused: No 00:14:05.895 Namespace Write Protected: No 00:14:05.895 Number of LBA Formats: 1 00:14:05.895 Current LBA Format: LBA Format #00 00:14:05.895 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:05.895 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:05.895 rmmod nvme_tcp 00:14:05.895 rmmod nvme_fabrics 00:14:05.895 rmmod nvme_keyring 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
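For reference, the trace above finishes the identify test by deleting the subsystem over RPC (rpc_cmd nvmf_delete_subsystem) and then unloading the initiator-side NVMe modules in nvmftestfini. A rough standalone equivalent of that teardown, outside the autotest harness, might look like the sketch below; the SPDK checkout path is an assumption based on this job's workspace layout, and the NQN is the one used throughout this run.

    # sketch of the traced teardown, run by hand (assumed paths, same NQN as above)
    SPDK_DIR=/home/vagrant/spdk_repo/spdk                        # assumption: matches this job's layout
    "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    # drop the initiator-side kernel modules, as nvmftestfini does above
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics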
00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74102 ']' 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74102 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74102 ']' 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74102 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:14:05.895 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:05.896 10:56:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74102 00:14:05.896 killing process with pid 74102 00:14:05.896 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:05.896 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:05.896 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74102' 00:14:05.896 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74102 00:14:05.896 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74102 00:14:06.155 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:06.155 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:06.155 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:06.155 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:14:06.155 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:06.155 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:14:06.155 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:14:06.155 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:06.155 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:06.155 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:06.155 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:06.155 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:06.155 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:06.155 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:06.155 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:06.155 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:14:06.415 ************************************ 00:14:06.415 END TEST nvmf_identify 00:14:06.415 ************************************ 00:14:06.415 00:14:06.415 real 0m3.055s 00:14:06.415 user 0m7.326s 00:14:06.415 sys 0m0.851s 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:06.415 ************************************ 00:14:06.415 START TEST nvmf_perf 00:14:06.415 ************************************ 00:14:06.415 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:06.675 * Looking for test storage... 
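The controller report earlier in this log was produced by the identify test script (host/identify.sh) against the TCP listener at 10.0.0.3:4420 for nqn.2016-06.io.spdk:cnode1. Before the teardown above removed the listener and its network namespace, a comparable initiator-side view could have been taken with stock nvme-cli; this is only a sketch, and the /dev/nvme0 device name is an assumption (the actual name depends on what else is attached).

    # illustrative nvme-cli session against the same listener (only valid before the teardown above)
    nvme discover   -t tcp -a 10.0.0.3 -s 4420                    # list discovery log entries
    nvme connect    -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl    /dev/nvme0                                    # assumption: check `nvme list` for the real name
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1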
00:14:06.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:06.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.675 --rc genhtml_branch_coverage=1 00:14:06.675 --rc genhtml_function_coverage=1 00:14:06.675 --rc genhtml_legend=1 00:14:06.675 --rc geninfo_all_blocks=1 00:14:06.675 --rc geninfo_unexecuted_blocks=1 00:14:06.675 00:14:06.675 ' 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:06.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.675 --rc genhtml_branch_coverage=1 00:14:06.675 --rc genhtml_function_coverage=1 00:14:06.675 --rc genhtml_legend=1 00:14:06.675 --rc geninfo_all_blocks=1 00:14:06.675 --rc geninfo_unexecuted_blocks=1 00:14:06.675 00:14:06.675 ' 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:06.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.675 --rc genhtml_branch_coverage=1 00:14:06.675 --rc genhtml_function_coverage=1 00:14:06.675 --rc genhtml_legend=1 00:14:06.675 --rc geninfo_all_blocks=1 00:14:06.675 --rc geninfo_unexecuted_blocks=1 00:14:06.675 00:14:06.675 ' 00:14:06.675 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:06.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.675 --rc genhtml_branch_coverage=1 00:14:06.675 --rc genhtml_function_coverage=1 00:14:06.676 --rc genhtml_legend=1 00:14:06.676 --rc geninfo_all_blocks=1 00:14:06.676 --rc geninfo_unexecuted_blocks=1 00:14:06.676 00:14:06.676 ' 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:06.676 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:06.676 Cannot find device "nvmf_init_br" 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:06.676 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:06.935 Cannot find device "nvmf_init_br2" 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:06.935 Cannot find device "nvmf_tgt_br" 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:06.935 Cannot find device "nvmf_tgt_br2" 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:06.935 Cannot find device "nvmf_init_br" 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:06.935 Cannot find device "nvmf_init_br2" 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:06.935 Cannot find device "nvmf_tgt_br" 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:06.935 Cannot find device "nvmf_tgt_br2" 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:06.935 Cannot find device "nvmf_br" 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:06.935 Cannot find device "nvmf_init_if" 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:14:06.935 10:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:06.935 Cannot find device "nvmf_init_if2" 00:14:06.935 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:14:06.935 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:06.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:06.935 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:14:06.935 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:06.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:06.935 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:14:06.935 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:06.935 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:06.935 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:06.935 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:06.935 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:06.935 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:06.935 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:06.935 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:06.935 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:07.195 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:07.195 10:57:00 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:07.195 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:07.195 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:07.195 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:07.195 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:07.196 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:07.196 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:14:07.196 00:14:07.196 --- 10.0.0.3 ping statistics --- 00:14:07.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.196 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:07.196 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:07.196 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.096 ms 00:14:07.196 00:14:07.196 --- 10.0.0.4 ping statistics --- 00:14:07.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.196 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:07.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:07.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:14:07.196 00:14:07.196 --- 10.0.0.1 ping statistics --- 00:14:07.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.196 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:07.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:07.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:14:07.196 00:14:07.196 --- 10.0.0.2 ping statistics --- 00:14:07.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:07.196 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74359 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74359 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:07.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74359 ']' 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
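The entries above show nvmf_veth_init building the virtual NVMe/TCP test topology before the target application is started: a network namespace (nvmf_tgt_ns_spdk), two initiator-side and two target-side veth pairs, a bridge (nvmf_br), the 10.0.0.1-10.0.0.4/24 addresses, iptables ACCEPT rules for port 4420, and ping checks in both directions. A minimal standalone sketch of the same layout, assuming root privileges and stock iproute2/iptables, with names and addresses taken directly from the log entries above (this is an illustrative reconstruction, not the common.sh implementation itself), might look like:

    #!/usr/bin/env bash
    # Sketch of the veth/netns topology shown in the log above; adjust names as needed.
    set -euo pipefail

    ip netns add nvmf_tgt_ns_spdk

    # Veth pairs: one end stays in the root namespace as a bridge port,
    # the target-side ends are moved into the namespace.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiators on 10.0.0.1/.2, target interfaces on 10.0.0.3/.4.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the four root-namespace ends together.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Allow NVMe/TCP traffic on port 4420 and let the bridge forward it.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Sanity check: the target addresses answer from the root namespace,
    # the initiator addresses answer from inside the namespace.
    ping -c 1 10.0.0.3
    ping -c 1 10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

Teardown mirrors the nvmf_veth_fini entries earlier in the log: detach and down the bridge ports, delete nvmf_br and the veth pairs, and remove the nvmf_tgt_ns_spdk namespace.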
00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:07.196 10:57:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:07.196 [2024-12-09 10:57:00.366809] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:14:07.196 [2024-12-09 10:57:00.366869] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.456 [2024-12-09 10:57:00.518139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:07.456 [2024-12-09 10:57:00.568242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:07.456 [2024-12-09 10:57:00.568365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:07.456 [2024-12-09 10:57:00.568400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:07.456 [2024-12-09 10:57:00.568427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:07.456 [2024-12-09 10:57:00.568444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:07.456 [2024-12-09 10:57:00.569359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.456 [2024-12-09 10:57:00.569438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.456 [2024-12-09 10:57:00.569520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.456 [2024-12-09 10:57:00.569525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:07.456 [2024-12-09 10:57:00.612139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:08.392 10:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.392 10:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:14:08.392 10:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:08.392 10:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:08.392 10:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:08.392 10:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.392 10:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:08.392 10:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:08.652 10:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:08.652 10:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:08.929 10:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:08.929 10:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:08.929 10:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:08.929 10:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:14:08.929 10:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:08.929 10:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:08.929 10:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:09.200 [2024-12-09 10:57:02.254311] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.200 10:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:09.459 10:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:09.459 10:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:09.718 10:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:09.718 10:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:09.718 10:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:09.974 [2024-12-09 10:57:03.053839] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:09.974 10:57:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:10.231 10:57:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:10.231 10:57:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:10.231 10:57:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:10.231 10:57:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:11.607 Initializing NVMe Controllers 00:14:11.607 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:11.607 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:11.607 Initialization complete. Launching workers. 00:14:11.607 ======================================================== 00:14:11.607 Latency(us) 00:14:11.607 Device Information : IOPS MiB/s Average min max 00:14:11.607 PCIE (0000:00:10.0) NSID 1 from core 0: 22568.30 88.16 1418.80 288.46 7836.50 00:14:11.607 ======================================================== 00:14:11.607 Total : 22568.30 88.16 1418.80 288.46 7836.50 00:14:11.607 00:14:11.607 10:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:12.542 Initializing NVMe Controllers 00:14:12.542 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:12.542 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:12.542 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:12.542 Initialization complete. Launching workers. 
00:14:12.542 ======================================================== 00:14:12.542 Latency(us) 00:14:12.542 Device Information : IOPS MiB/s Average min max 00:14:12.542 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5318.44 20.78 187.80 71.80 5148.57 00:14:12.542 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.64 0.49 8087.04 5997.85 12055.53 00:14:12.542 ======================================================== 00:14:12.542 Total : 5443.08 21.26 368.67 71.80 12055.53 00:14:12.542 00:14:12.800 10:57:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:14.174 Initializing NVMe Controllers 00:14:14.174 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:14.174 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:14.174 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:14.174 Initialization complete. Launching workers. 00:14:14.174 ======================================================== 00:14:14.174 Latency(us) 00:14:14.174 Device Information : IOPS MiB/s Average min max 00:14:14.174 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11455.00 44.75 2794.53 446.48 6434.91 00:14:14.174 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4000.00 15.62 8037.29 4349.71 12602.78 00:14:14.174 ======================================================== 00:14:14.174 Total : 15455.00 60.37 4151.44 446.48 12602.78 00:14:14.174 00:14:14.174 10:57:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:14.174 10:57:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:16.706 Initializing NVMe Controllers 00:14:16.706 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:16.706 Controller IO queue size 128, less than required. 00:14:16.706 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:16.706 Controller IO queue size 128, less than required. 00:14:16.706 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:16.706 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:16.706 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:16.706 Initialization complete. Launching workers. 
00:14:16.706 ======================================================== 00:14:16.706 Latency(us) 00:14:16.706 Device Information : IOPS MiB/s Average min max 00:14:16.706 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1681.15 420.29 77040.65 31491.80 168770.63 00:14:16.706 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 629.18 157.30 212697.29 53800.81 363171.73 00:14:16.706 ======================================================== 00:14:16.706 Total : 2310.33 577.58 113984.56 31491.80 363171.73 00:14:16.706 00:14:16.965 10:57:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:14:17.224 Initializing NVMe Controllers 00:14:17.224 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:17.224 Controller IO queue size 128, less than required. 00:14:17.224 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:17.225 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:17.225 Controller IO queue size 128, less than required. 00:14:17.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:17.225 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:17.225 WARNING: Some requested NVMe devices were skipped 00:14:17.225 No valid NVMe controllers or AIO or URING devices found 00:14:17.225 10:57:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:14:19.758 Initializing NVMe Controllers 00:14:19.758 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:19.758 Controller IO queue size 128, less than required. 00:14:19.758 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:19.758 Controller IO queue size 128, less than required. 00:14:19.758 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:19.758 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:19.758 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:19.758 Initialization complete. Launching workers. 
00:14:19.758 00:14:19.758 ==================== 00:14:19.758 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:19.758 TCP transport: 00:14:19.758 polls: 17164 00:14:19.758 idle_polls: 11871 00:14:19.758 sock_completions: 5293 00:14:19.758 nvme_completions: 8405 00:14:19.758 submitted_requests: 12556 00:14:19.758 queued_requests: 1 00:14:19.758 00:14:19.758 ==================== 00:14:19.758 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:19.758 TCP transport: 00:14:19.758 polls: 21346 00:14:19.758 idle_polls: 15460 00:14:19.758 sock_completions: 5886 00:14:19.758 nvme_completions: 7869 00:14:19.758 submitted_requests: 11752 00:14:19.758 queued_requests: 1 00:14:19.758 ======================================================== 00:14:19.758 Latency(us) 00:14:19.758 Device Information : IOPS MiB/s Average min max 00:14:19.758 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2098.73 524.68 62080.78 30093.69 100048.69 00:14:19.758 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1964.88 491.22 65330.68 24728.11 115191.52 00:14:19.758 ======================================================== 00:14:19.758 Total : 4063.61 1015.90 63652.21 24728.11 115191.52 00:14:19.758 00:14:19.758 10:57:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:19.758 10:57:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.017 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:20.017 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:20.017 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:20.017 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:20.017 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:14:20.017 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:20.017 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:14:20.017 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:20.017 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:20.017 rmmod nvme_tcp 00:14:20.017 rmmod nvme_fabrics 00:14:20.017 rmmod nvme_keyring 00:14:20.276 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:20.276 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:14:20.276 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:14:20.276 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74359 ']' 00:14:20.276 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74359 00:14:20.276 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74359 ']' 00:14:20.276 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74359 00:14:20.276 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:14:20.276 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.276 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74359 00:14:20.276 killing process with pid 74359 00:14:20.276 10:57:13 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:20.276 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:20.276 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74359' 00:14:20.276 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74359 00:14:20.276 10:57:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74359 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:22.183 10:57:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:14:22.183 00:14:22.183 real 0m15.578s 00:14:22.183 user 0m56.144s 00:14:22.183 sys 0m3.890s 00:14:22.183 10:57:15 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:22.183 ************************************ 00:14:22.183 END TEST nvmf_perf 00:14:22.183 ************************************ 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:22.183 ************************************ 00:14:22.183 START TEST nvmf_fio_host 00:14:22.183 ************************************ 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:22.183 * Looking for test storage... 00:14:22.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:14:22.183 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:22.443 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:22.443 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:22.443 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:22.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.444 --rc genhtml_branch_coverage=1 00:14:22.444 --rc genhtml_function_coverage=1 00:14:22.444 --rc genhtml_legend=1 00:14:22.444 --rc geninfo_all_blocks=1 00:14:22.444 --rc geninfo_unexecuted_blocks=1 00:14:22.444 00:14:22.444 ' 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:22.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.444 --rc genhtml_branch_coverage=1 00:14:22.444 --rc genhtml_function_coverage=1 00:14:22.444 --rc genhtml_legend=1 00:14:22.444 --rc geninfo_all_blocks=1 00:14:22.444 --rc geninfo_unexecuted_blocks=1 00:14:22.444 00:14:22.444 ' 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:22.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.444 --rc genhtml_branch_coverage=1 00:14:22.444 --rc genhtml_function_coverage=1 00:14:22.444 --rc genhtml_legend=1 00:14:22.444 --rc geninfo_all_blocks=1 00:14:22.444 --rc geninfo_unexecuted_blocks=1 00:14:22.444 00:14:22.444 ' 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:22.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.444 --rc genhtml_branch_coverage=1 00:14:22.444 --rc genhtml_function_coverage=1 00:14:22.444 --rc genhtml_legend=1 00:14:22.444 --rc geninfo_all_blocks=1 00:14:22.444 --rc geninfo_unexecuted_blocks=1 00:14:22.444 00:14:22.444 ' 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.444 10:57:15 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.444 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.445 10:57:15 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:22.445 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:22.445 Cannot find device "nvmf_init_br" 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:22.445 Cannot find device "nvmf_init_br2" 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:22.445 Cannot find device "nvmf_tgt_br" 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:22.445 Cannot find device "nvmf_tgt_br2" 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:22.445 Cannot find device "nvmf_init_br" 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:22.445 Cannot find device "nvmf_init_br2" 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:22.445 Cannot find device "nvmf_tgt_br" 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:22.445 Cannot find device "nvmf_tgt_br2" 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:22.445 Cannot find device "nvmf_br" 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:14:22.445 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:22.705 Cannot find device "nvmf_init_if" 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:22.705 Cannot find device "nvmf_init_if2" 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:22.705 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:22.705 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:22.705 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:22.705 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:14:22.705 00:14:22.705 --- 10.0.0.3 ping statistics --- 00:14:22.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.705 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:22.705 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:22.705 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.093 ms 00:14:22.705 00:14:22.705 --- 10.0.0.4 ping statistics --- 00:14:22.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.705 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:22.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:22.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:14:22.705 00:14:22.705 --- 10.0.0.1 ping statistics --- 00:14:22.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.705 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:22.705 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:22.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:22.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:14:22.965 00:14:22.965 --- 10.0.0.2 ping statistics --- 00:14:22.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.965 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74834 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74834 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 74834 ']' 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.965 10:57:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:22.965 [2024-12-09 10:57:15.989919] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:14:22.965 [2024-12-09 10:57:15.989972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.965 [2024-12-09 10:57:16.142225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.224 [2024-12-09 10:57:16.188716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.224 [2024-12-09 10:57:16.188772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.224 [2024-12-09 10:57:16.188778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.224 [2024-12-09 10:57:16.188783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.224 [2024-12-09 10:57:16.188787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:23.224 [2024-12-09 10:57:16.189662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.224 [2024-12-09 10:57:16.189860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.224 [2024-12-09 10:57:16.189866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.224 [2024-12-09 10:57:16.189795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.224 [2024-12-09 10:57:16.231190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:23.793 10:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.793 10:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:14:23.793 10:57:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:24.052 [2024-12-09 10:57:17.025556] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.052 10:57:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:24.052 10:57:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:24.052 10:57:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:24.052 10:57:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:24.311 Malloc1 00:14:24.311 10:57:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:24.570 10:57:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:24.570 10:57:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:24.829 [2024-12-09 10:57:17.863122] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:24.829 10:57:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:14:25.089 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:25.090 10:57:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:25.348 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:25.348 fio-3.35 00:14:25.348 Starting 1 thread 00:14:27.884 00:14:27.884 test: (groupid=0, jobs=1): err= 0: pid=74916: Mon Dec 9 10:57:20 2024 00:14:27.884 read: IOPS=11.2k, BW=43.7MiB/s (45.9MB/s)(87.8MiB/2006msec) 00:14:27.884 slat (nsec): min=1470, max=416004, avg=1690.63, stdev=3555.29 00:14:27.884 clat (usec): min=3187, max=11523, avg=5985.38, stdev=592.69 00:14:27.884 lat (usec): min=3190, max=11525, avg=5987.08, stdev=592.89 00:14:27.884 clat percentiles (usec): 00:14:27.884 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:14:27.884 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063], 00:14:27.884 | 70.00th=[ 6259], 80.00th=[ 6456], 90.00th=[ 6718], 95.00th=[ 6915], 00:14:27.884 | 99.00th=[ 7373], 99.50th=[ 8094], 99.90th=[10290], 99.95th=[10814], 00:14:27.884 | 99.99th=[11469] 00:14:27.884 bw ( KiB/s): min=41608, max=47208, per=99.96%, avg=44776.00, stdev=2437.71, samples=4 00:14:27.884 iops : min=10402, max=11802, avg=11194.00, stdev=609.43, samples=4 00:14:27.884 write: IOPS=11.1k, BW=43.5MiB/s (45.7MB/s)(87.4MiB/2006msec); 0 zone resets 00:14:27.884 slat (nsec): min=1506, max=320186, avg=1738.27, stdev=2408.46 00:14:27.884 clat (usec): min=2868, max=11334, avg=5420.41, stdev=531.28 00:14:27.884 lat (usec): min=2871, max=11335, avg=5422.15, stdev=531.60 00:14:27.884 
clat percentiles (usec): 00:14:27.884 | 1.00th=[ 4359], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5014], 00:14:27.884 | 30.00th=[ 5145], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5473], 00:14:27.884 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 6063], 95.00th=[ 6259], 00:14:27.884 | 99.00th=[ 6718], 99.50th=[ 7439], 99.90th=[ 9503], 99.95th=[ 9765], 00:14:27.884 | 99.99th=[10945] 00:14:27.884 bw ( KiB/s): min=41024, max=46544, per=100.00%, avg=44628.00, stdev=2600.07, samples=4 00:14:27.884 iops : min=10256, max=11636, avg=11157.00, stdev=650.02, samples=4 00:14:27.884 lat (msec) : 4=0.29%, 10=99.59%, 20=0.12% 00:14:27.884 cpu : usr=74.86%, sys=20.50%, ctx=27, majf=0, minf=7 00:14:27.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:14:27.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:27.884 issued rwts: total=22465,22364,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.884 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:27.884 00:14:27.884 Run status group 0 (all jobs): 00:14:27.884 READ: bw=43.7MiB/s (45.9MB/s), 43.7MiB/s-43.7MiB/s (45.9MB/s-45.9MB/s), io=87.8MiB (92.0MB), run=2006-2006msec 00:14:27.884 WRITE: bw=43.5MiB/s (45.7MB/s), 43.5MiB/s-43.5MiB/s (45.7MB/s-45.7MB/s), io=87.4MiB (91.6MB), run=2006-2006msec 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:27.884 10:57:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:27.884 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:27.884 fio-3.35 00:14:27.884 Starting 1 thread 00:14:30.499 00:14:30.499 test: (groupid=0, jobs=1): err= 0: pid=74960: Mon Dec 9 10:57:23 2024 00:14:30.499 read: IOPS=8439, BW=132MiB/s (138MB/s)(265MiB/2008msec) 00:14:30.499 slat (nsec): min=2353, max=92228, avg=2717.29, stdev=1525.22 00:14:30.499 clat (usec): min=1826, max=18961, avg=8757.56, stdev=2670.21 00:14:30.499 lat (usec): min=1828, max=18964, avg=8760.27, stdev=2670.26 00:14:30.499 clat percentiles (usec): 00:14:30.499 | 1.00th=[ 3523], 5.00th=[ 4490], 10.00th=[ 5276], 20.00th=[ 6390], 00:14:30.499 | 30.00th=[ 7177], 40.00th=[ 7767], 50.00th=[ 8586], 60.00th=[ 9503], 00:14:30.499 | 70.00th=[10290], 80.00th=[11338], 90.00th=[12256], 95.00th=[13042], 00:14:30.499 | 99.00th=[15795], 99.50th=[16188], 99.90th=[16909], 99.95th=[17171], 00:14:30.499 | 99.99th=[18220] 00:14:30.499 bw ( KiB/s): min=66944, max=72896, per=52.12%, avg=70376.00, stdev=2567.37, samples=4 00:14:30.499 iops : min= 4184, max= 4556, avg=4398.50, stdev=160.46, samples=4 00:14:30.499 write: IOPS=4878, BW=76.2MiB/s (79.9MB/s)(144MiB/1889msec); 0 zone resets 00:14:30.499 slat (usec): min=27, max=428, avg=30.20, stdev=10.13 00:14:30.499 clat (usec): min=5243, max=21225, avg=11006.92, stdev=2357.73 00:14:30.499 lat (usec): min=5271, max=21253, avg=11037.12, stdev=2360.14 00:14:30.499 clat percentiles (usec): 00:14:30.499 | 1.00th=[ 6587], 5.00th=[ 7570], 10.00th=[ 8094], 20.00th=[ 8979], 00:14:30.499 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[10814], 60.00th=[11338], 00:14:30.499 | 70.00th=[11994], 80.00th=[12780], 90.00th=[14222], 95.00th=[15139], 00:14:30.499 | 99.00th=[17695], 99.50th=[18744], 99.90th=[19792], 99.95th=[20055], 00:14:30.499 | 99.99th=[21103] 00:14:30.499 bw ( KiB/s): min=69632, max=75680, per=93.63%, avg=73080.00, stdev=2767.26, samples=4 00:14:30.499 iops : min= 4352, max= 4730, avg=4567.50, stdev=172.95, samples=4 00:14:30.499 lat (msec) : 2=0.01%, 4=1.52%, 10=54.10%, 20=44.35%, 50=0.02% 00:14:30.499 cpu : usr=82.06%, sys=15.30%, ctx=6, majf=0, minf=4 00:14:30.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:30.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:30.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:30.499 issued rwts: total=16947,9215,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:30.499 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:30.499 00:14:30.499 Run status group 0 (all 
jobs): 00:14:30.499 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=265MiB (278MB), run=2008-2008msec 00:14:30.499 WRITE: bw=76.2MiB/s (79.9MB/s), 76.2MiB/s-76.2MiB/s (79.9MB/s-79.9MB/s), io=144MiB (151MB), run=1889-1889msec 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:30.499 rmmod nvme_tcp 00:14:30.499 rmmod nvme_fabrics 00:14:30.499 rmmod nvme_keyring 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74834 ']' 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74834 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 74834 ']' 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 74834 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74834 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74834' 00:14:30.499 killing process with pid 74834 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 74834 00:14:30.499 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 74834 00:14:30.758 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:30.758 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:30.758 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:30.758 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:14:30.758 10:57:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:30.758 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:14:30.758 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:14:30.758 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:30.758 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:30.758 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:30.758 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:30.758 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:30.758 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:30.758 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:30.758 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:30.758 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:30.758 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:31.017 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:31.017 10:57:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:31.017 10:57:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:31.017 10:57:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:31.017 10:57:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:31.017 10:57:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:31.018 10:57:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.018 10:57:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.018 10:57:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.018 10:57:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:14:31.018 00:14:31.018 real 0m8.895s 00:14:31.018 user 0m34.885s 00:14:31.018 sys 0m2.333s 00:14:31.018 10:57:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.018 10:57:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:31.018 ************************************ 00:14:31.018 END TEST nvmf_fio_host 00:14:31.018 ************************************ 00:14:31.018 10:57:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:31.018 10:57:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:31.018 10:57:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.018 10:57:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:31.018 ************************************ 00:14:31.018 START TEST nvmf_failover 
00:14:31.018 ************************************ 00:14:31.018 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:31.278 * Looking for test storage... 00:14:31.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:31.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.278 --rc genhtml_branch_coverage=1 00:14:31.278 --rc genhtml_function_coverage=1 00:14:31.278 --rc genhtml_legend=1 00:14:31.278 --rc geninfo_all_blocks=1 00:14:31.278 --rc geninfo_unexecuted_blocks=1 00:14:31.278 00:14:31.278 ' 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:31.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.278 --rc genhtml_branch_coverage=1 00:14:31.278 --rc genhtml_function_coverage=1 00:14:31.278 --rc genhtml_legend=1 00:14:31.278 --rc geninfo_all_blocks=1 00:14:31.278 --rc geninfo_unexecuted_blocks=1 00:14:31.278 00:14:31.278 ' 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:31.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.278 --rc genhtml_branch_coverage=1 00:14:31.278 --rc genhtml_function_coverage=1 00:14:31.278 --rc genhtml_legend=1 00:14:31.278 --rc geninfo_all_blocks=1 00:14:31.278 --rc geninfo_unexecuted_blocks=1 00:14:31.278 00:14:31.278 ' 00:14:31.278 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:31.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.278 --rc genhtml_branch_coverage=1 00:14:31.278 --rc genhtml_function_coverage=1 00:14:31.278 --rc genhtml_legend=1 00:14:31.278 --rc geninfo_all_blocks=1 00:14:31.278 --rc geninfo_unexecuted_blocks=1 00:14:31.278 00:14:31.279 ' 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.279 
10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:31.279 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:31.279 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:31.538 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:31.539 Cannot find device "nvmf_init_br" 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:31.539 Cannot find device "nvmf_init_br2" 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:14:31.539 Cannot find device "nvmf_tgt_br" 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:31.539 Cannot find device "nvmf_tgt_br2" 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:31.539 Cannot find device "nvmf_init_br" 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:31.539 Cannot find device "nvmf_init_br2" 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:31.539 Cannot find device "nvmf_tgt_br" 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:31.539 Cannot find device "nvmf_tgt_br2" 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:31.539 Cannot find device "nvmf_br" 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:31.539 Cannot find device "nvmf_init_if" 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:31.539 Cannot find device "nvmf_init_if2" 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:31.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:31.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:31.539 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:31.798 
10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:31.798 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:31.798 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:31.798 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:31.798 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:31.798 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:31.798 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:31.798 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:31.799 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:31.799 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:14:31.799 00:14:31.799 --- 10.0.0.3 ping statistics --- 00:14:31.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.799 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:31.799 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:31.799 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:14:31.799 00:14:31.799 --- 10.0.0.4 ping statistics --- 00:14:31.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.799 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:31.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:31.799 00:14:31.799 --- 10.0.0.1 ping statistics --- 00:14:31.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.799 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:31.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:14:31.799 00:14:31.799 --- 10.0.0.2 ping statistics --- 00:14:31.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.799 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75229 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75229 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:31.799 10:57:24 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75229 ']' 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.799 10:57:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:31.799 [2024-12-09 10:57:24.880995] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:14:31.799 [2024-12-09 10:57:24.881056] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.058 [2024-12-09 10:57:25.029465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:32.058 [2024-12-09 10:57:25.073726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.058 [2024-12-09 10:57:25.073792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.058 [2024-12-09 10:57:25.073799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.058 [2024-12-09 10:57:25.073804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.058 [2024-12-09 10:57:25.073808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
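The target above is launched with core mask 0xE, which is why three reactors come up on cores 1-3 in the entries that follow; a quick way to decode such a mask (a sketch for reading the log, not something the test executes):

    mask=0xE                      # 0b1110 -> cores 1, 2 and 3
    printf 'reactor cores:'
    for b in 0 1 2 3; do
      (( (mask >> b) & 1 )) && printf ' %d' "$b"
    done
    echo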
00:14:32.058 [2024-12-09 10:57:25.074682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.058 [2024-12-09 10:57:25.074817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.058 [2024-12-09 10:57:25.074822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:32.058 [2024-12-09 10:57:25.116304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:32.624 10:57:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.624 10:57:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:14:32.624 10:57:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:32.624 10:57:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:32.624 10:57:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:32.624 10:57:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.624 10:57:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:32.882 [2024-12-09 10:57:25.933691] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.882 10:57:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:33.140 Malloc0 00:14:33.140 10:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:33.399 10:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:33.399 10:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:33.657 [2024-12-09 10:57:26.687024] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:33.657 10:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:33.915 [2024-12-09 10:57:26.886785] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:33.915 10:57:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:14:33.915 [2024-12-09 10:57:27.086554] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:14:34.174 10:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75281 00:14:34.174 10:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:34.174 10:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
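For readability, the target-side setup the failover test drives through rpc.py above condenses to the following sketch (commands as they appear in the trace, with the full rpc.py and bdevperf paths shortened and the three listener calls collapsed into a loop):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do    # one listener per port on 10.0.0.3
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
    done
    # bdevperf then runs against /var/tmp/bdevperf.sock while listeners are removed and re-added
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f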
00:14:34.174 10:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75281 /var/tmp/bdevperf.sock 00:14:34.174 10:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75281 ']' 00:14:34.174 10:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:34.174 10:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:34.174 10:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:34.174 10:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.174 10:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:35.109 10:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.109 10:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:14:35.109 10:57:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:35.109 NVMe0n1 00:14:35.109 10:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:35.367 00:14:35.367 10:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75305 00:14:35.367 10:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:35.367 10:57:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:14:36.333 10:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:36.592 10:57:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:14:39.880 10:57:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:39.880 00:14:39.880 10:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:40.149 [2024-12-09 10:57:33.204099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54d930 is same with the state(6) to be set 00:14:40.149 [2024-12-09 10:57:33.204147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54d930 is same with the state(6) to be set 00:14:40.149 [2024-12-09 10:57:33.204154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54d930 is same with the state(6) to be set 00:14:40.149 10:57:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:14:43.441 10:57:36 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:43.441 [2024-12-09 10:57:36.417818] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:43.441 10:57:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:14:44.379 10:57:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:14:44.638 10:57:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75305 00:14:51.221 { 00:14:51.221 "results": [ 00:14:51.221 { 00:14:51.221 "job": "NVMe0n1", 00:14:51.221 "core_mask": "0x1", 00:14:51.221 "workload": "verify", 00:14:51.221 "status": "finished", 00:14:51.221 "verify_range": { 00:14:51.221 "start": 0, 00:14:51.221 "length": 16384 00:14:51.221 }, 00:14:51.221 "queue_depth": 128, 00:14:51.221 "io_size": 4096, 00:14:51.221 "runtime": 15.007859, 00:14:51.221 "iops": 11486.58179691054, 00:14:51.221 "mibps": 44.869460144181794, 00:14:51.221 "io_failed": 3885, 00:14:51.221 "io_timeout": 0, 00:14:51.221 "avg_latency_us": 10874.686778428957, 00:14:51.221 "min_latency_us": 447.16157205240177, 00:14:51.221 "max_latency_us": 26099.926637554585 00:14:51.221 } 00:14:51.221 ], 00:14:51.221 "core_count": 1 00:14:51.221 } 00:14:51.221 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75281 00:14:51.222 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75281 ']' 00:14:51.222 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75281 00:14:51.222 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:14:51.222 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.222 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75281 00:14:51.222 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:51.222 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:51.222 killing process with pid 75281 00:14:51.222 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75281' 00:14:51.222 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75281 00:14:51.222 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75281 00:14:51.222 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:51.222 [2024-12-09 10:57:27.152513] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
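As a sanity check on the JSON summary printed above, the reported throughput follows directly from the measured IOPS and the 4096-byte I/O size (recomputed here, not produced by the test):

    # 11486.58 IOPS * 4096 B per I/O, expressed in MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 11486.58179691054 * 4096 / (1024 * 1024) }'
    # -> 44.87 MiB/s, matching the "mibps" field reported by bdevperf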
00:14:51.222 [2024-12-09 10:57:27.152592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75281 ] 00:14:51.222 [2024-12-09 10:57:27.279917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.222 [2024-12-09 10:57:27.326497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.222 [2024-12-09 10:57:27.366511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:51.222 Running I/O for 15 seconds... 00:14:51.222 11744.00 IOPS, 45.88 MiB/s [2024-12-09T10:57:44.401Z] [2024-12-09 10:57:29.709321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:106944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.222 [2024-12-09 10:57:29.709376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.222 [2024-12-09 10:57:29.709404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.222 [2024-12-09 10:57:29.709423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:106968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.222 [2024-12-09 10:57:29.709441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.222 [2024-12-09 10:57:29.709459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.222 [2024-12-09 10:57:29.709476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.222 [2024-12-09 10:57:29.709494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.222 [2024-12-09 10:57:29.709512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:14:51.222 [2024-12-09 10:57:29.709521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 
10:57:29.709730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:106448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709923] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.222 [2024-12-09 10:57:29.709934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.222 [2024-12-09 10:57:29.709943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.709951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.709961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.709969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.709979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.709987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.709996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.223 [2024-12-09 10:57:29.710005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.223 [2024-12-09 10:57:29.710023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.223 [2024-12-09 10:57:29.710040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.223 [2024-12-09 10:57:29.710063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.223 [2024-12-09 10:57:29.710080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.223 [2024-12-09 10:57:29.710098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710107] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.223 [2024-12-09 10:57:29.710115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:107064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.223 [2024-12-09 10:57:29.710132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.223 [2024-12-09 10:57:29.710441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.223 [2024-12-09 10:57:29.710459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107088 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:14:51.223 [2024-12-09 10:57:29.710476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.223 [2024-12-09 10:57:29.710494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.223 [2024-12-09 10:57:29.710517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:107112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.223 [2024-12-09 10:57:29.710535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.223 [2024-12-09 10:57:29.710553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.223 [2024-12-09 10:57:29.710570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.223 [2024-12-09 10:57:29.710606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.223 [2024-12-09 10:57:29.710615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 
[2024-12-09 10:57:29.710660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710853] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.710991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.710999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.711009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.224 [2024-12-09 10:57:29.711017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.711026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.224 [2024-12-09 10:57:29.711035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.711045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.224 [2024-12-09 10:57:29.711053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.711062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.224 [2024-12-09 10:57:29.711071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.711080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.224 [2024-12-09 10:57:29.711088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.711098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.224 [2024-12-09 10:57:29.711108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.711118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:107176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.224 [2024-12-09 10:57:29.711126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.711136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:107184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.224 [2024-12-09 10:57:29.711144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.711153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.224 [2024-12-09 10:57:29.711162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.711171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.224 [2024-12-09 10:57:29.711180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.711189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.224 [2024-12-09 10:57:29.711197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.711206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.224 [2024-12-09 10:57:29.711218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.711228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.224 [2024-12-09 10:57:29.711236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.711245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.224 [2024-12-09 10:57:29.711259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.224 [2024-12-09 10:57:29.711269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.224 [2024-12-09 10:57:29.711277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.225 [2024-12-09 10:57:29.711295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.225 [2024-12-09 10:57:29.711313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.225 [2024-12-09 10:57:29.711332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.225 [2024-12-09 10:57:29.711349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.225 [2024-12-09 10:57:29.711367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.225 [2024-12-09 10:57:29.711385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.225 [2024-12-09 10:57:29.711404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.225 [2024-12-09 10:57:29.711422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.225 [2024-12-09 10:57:29.711440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.225 [2024-12-09 10:57:29.711462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:106880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.225 [2024-12-09 10:57:29.711479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.225 [2024-12-09 10:57:29.711497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.225 [2024-12-09 10:57:29.711515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.225 [2024-12-09 10:57:29.711533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.225 [2024-12-09 10:57:29.711552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.225 [2024-12-09 10:57:29.711570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.225 [2024-12-09 10:57:29.711587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:14:51.225 [2024-12-09 10:57:29.711597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x841b20 is same with the state(6) to be set 00:14:51.225 [2024-12-09 10:57:29.711608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.225 [2024-12-09 10:57:29.711613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.225 [2024-12-09 10:57:29.711620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106936 len:8 PRP1 0x0 PRP2 0x0 00:14:51.225 [2024-12-09 10:57:29.711628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.225 [2024-12-09 10:57:29.711643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.225 [2024-12-09 10:57:29.711649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107264 len:8 PRP1 0x0 PRP2 0x0 00:14:51.225 [2024-12-09 10:57:29.711658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.225 [2024-12-09 10:57:29.711671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.225 [2024-12-09 10:57:29.711683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107272 len:8 PRP1 0x0 PRP2 0x0 00:14:51.225 [2024-12-09 10:57:29.711691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.225 [2024-12-09 10:57:29.711705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.225 [2024-12-09 10:57:29.711711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107280 len:8 PRP1 0x0 PRP2 0x0 00:14:51.225 [2024-12-09 10:57:29.711720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.225 [2024-12-09 10:57:29.711733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.225 [2024-12-09 10:57:29.711739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107288 len:8 PRP1 0x0 PRP2 0x0 00:14:51.225 [2024-12-09 10:57:29.711755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.225 [2024-12-09 10:57:29.711769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.225 [2024-12-09 10:57:29.711775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107296 len:8 PRP1 0x0 PRP2 0x0 00:14:51.225 [2024-12-09 10:57:29.711783] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.225 [2024-12-09 10:57:29.711797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.225 [2024-12-09 10:57:29.711805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107304 len:8 PRP1 0x0 PRP2 0x0 00:14:51.225 [2024-12-09 10:57:29.711813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.225 [2024-12-09 10:57:29.711827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.225 [2024-12-09 10:57:29.711833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107312 len:8 PRP1 0x0 PRP2 0x0 00:14:51.225 [2024-12-09 10:57:29.711848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.225 [2024-12-09 10:57:29.711862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.225 [2024-12-09 10:57:29.711869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107320 len:8 PRP1 0x0 PRP2 0x0 00:14:51.225 [2024-12-09 10:57:29.711877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711924] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:14:51.225 [2024-12-09 10:57:29.711967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.225 [2024-12-09 10:57:29.711977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.711987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.225 [2024-12-09 10:57:29.712002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.225 [2024-12-09 10:57:29.712012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.225 [2024-12-09 10:57:29.712021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:29.712030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.226 [2024-12-09 10:57:29.712038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:29.712047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:14:51.226 [2024-12-09 10:57:29.714800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:51.226 [2024-12-09 10:57:29.714833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d2c60 (9): Bad file descriptor 00:14:51.226 [2024-12-09 10:57:29.744527] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:14:51.226 11691.00 IOPS, 45.67 MiB/s [2024-12-09T10:57:44.405Z] 11847.33 IOPS, 46.28 MiB/s [2024-12-09T10:57:44.405Z] 11897.50 IOPS, 46.47 MiB/s [2024-12-09T10:57:44.405Z] [2024-12-09 10:57:33.204521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.226 [2024-12-09 10:57:33.204566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.226 [2024-12-09 10:57:33.204594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.226 [2024-12-09 10:57:33.204614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.226 [2024-12-09 10:57:33.204633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.226 [2024-12-09 10:57:33.204652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.226 [2024-12-09 10:57:33.204671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.226 [2024-12-09 10:57:33.204689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.226 [2024-12-09 10:57:33.204708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.226 [2024-12-09 10:57:33.204753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.226 [2024-12-09 10:57:33.204782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.226 [2024-12-09 10:57:33.204801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.226 [2024-12-09 10:57:33.204820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.226 [2024-12-09 10:57:33.204839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.226 [2024-12-09 10:57:33.204857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.226 [2024-12-09 10:57:33.204876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.226 [2024-12-09 10:57:33.204895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.226 [2024-12-09 10:57:33.204919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.226 [2024-12-09 10:57:33.204939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:84 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.226 [2024-12-09 10:57:33.204957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.226 [2024-12-09 10:57:33.204977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.204987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.226 [2024-12-09 10:57:33.204995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.205011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.226 [2024-12-09 10:57:33.205020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.205031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.226 [2024-12-09 10:57:33.205039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.205049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.226 [2024-12-09 10:57:33.205058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.205068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.226 [2024-12-09 10:57:33.205077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.205087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.226 [2024-12-09 10:57:33.205095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.205105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.226 [2024-12-09 10:57:33.205114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.205124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.226 [2024-12-09 10:57:33.205133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.226 [2024-12-09 10:57:33.205143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61136 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 
[2024-12-09 10:57:33.205346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.227 [2024-12-09 10:57:33.205460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.227 [2024-12-09 10:57:33.205479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.227 [2024-12-09 10:57:33.205502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.227 [2024-12-09 10:57:33.205521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.227 [2024-12-09 10:57:33.205540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.227 [2024-12-09 10:57:33.205558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.227 [2024-12-09 10:57:33.205576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.227 [2024-12-09 10:57:33.205606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.227 [2024-12-09 10:57:33.205624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.227 [2024-12-09 10:57:33.205642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.227 [2024-12-09 10:57:33.205660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.227 [2024-12-09 10:57:33.205679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.227 [2024-12-09 10:57:33.205697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.227 [2024-12-09 10:57:33.205714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.227 [2024-12-09 10:57:33.205737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.227 [2024-12-09 10:57:33.205760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.227 [2024-12-09 10:57:33.205878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.227 [2024-12-09 10:57:33.205887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.205897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.205906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.205916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.205924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.205934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.205943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.205953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.205962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.205971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.205980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.205990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.228 [2024-12-09 10:57:33.206097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.228 [2024-12-09 10:57:33.206116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.228 [2024-12-09 10:57:33.206135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 
10:57:33.206145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.228 [2024-12-09 10:57:33.206155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.228 [2024-12-09 10:57:33.206174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.228 [2024-12-09 10:57:33.206192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.228 [2024-12-09 10:57:33.206212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:60936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.228 [2024-12-09 10:57:33.206230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:71 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.228 [2024-12-09 10:57:33.206551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.228 [2024-12-09 10:57:33.206569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.228 [2024-12-09 10:57:33.206589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.228 [2024-12-09 10:57:33.206600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.228 [2024-12-09 10:57:33.206608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.206619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.229 [2024-12-09 10:57:33.206628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.206637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.229 [2024-12-09 10:57:33.206646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.206656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.229 [2024-12-09 10:57:33.206665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.206675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.229 [2024-12-09 10:57:33.206684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.206693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x845c90 is same with the state(6) to be set 00:14:51.229 [2024-12-09 10:57:33.206704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.206712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.206719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61000 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.206728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.206742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.206749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.206755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61520 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.206771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.206781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.206787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.206793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61528 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.206802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.206811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.206817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.206824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61536 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.206832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.206841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.206847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.206853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61544 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.206862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.206871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.206877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.206883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61552 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.206891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.206900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.206907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.206914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61560 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.206922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.206931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 
10:57:33.206937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.206943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61568 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.206952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.206961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.206968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.206975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61576 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.206988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.206997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.207003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.207009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61584 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.207018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.207027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.207033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.207039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61592 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.207048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.207057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.207062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.207069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61600 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.207078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.207087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.207105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.207111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61608 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.207119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.207127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.207133] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.207139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61616 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.207147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.207155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.207160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.207166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61624 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.207174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.207182] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.207187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.207193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61632 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.207201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.207209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.207217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.207227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61640 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.207235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.207243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.207249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.207255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61648 len:8 PRP1 0x0 PRP2 0x0 00:14:51.229 [2024-12-09 10:57:33.207264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.229 [2024-12-09 10:57:33.207272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.229 [2024-12-09 10:57:33.207278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.229 [2024-12-09 10:57:33.207284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61656 len:8 PRP1 0x0 PRP2 0x0 00:14:51.230 [2024-12-09 10:57:33.207292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:33.207300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.230 [2024-12-09 10:57:33.207306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:14:51.230 [2024-12-09 10:57:33.207312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61664 len:8 PRP1 0x0 PRP2 0x0 00:14:51.230 [2024-12-09 10:57:33.207325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:33.207334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.230 [2024-12-09 10:57:33.207340] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.230 [2024-12-09 10:57:33.207345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61672 len:8 PRP1 0x0 PRP2 0x0 00:14:51.230 [2024-12-09 10:57:33.207353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:33.224208] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:14:51.230 [2024-12-09 10:57:33.224297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.230 [2024-12-09 10:57:33.224316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:33.224332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.230 [2024-12-09 10:57:33.224345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:33.224358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.230 [2024-12-09 10:57:33.224369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:33.224383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.230 [2024-12-09 10:57:33.224394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:33.224406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:14:51.230 [2024-12-09 10:57:33.224461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d2c60 (9): Bad file descriptor 00:14:51.230 [2024-12-09 10:57:33.228795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:14:51.230 [2024-12-09 10:57:33.253380] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
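The abort storm above is the host-side signature of a path going away: each queued WRITE is completed manually with ABORTED - SQ DELETION, bdev_nvme starts a failover from 10.0.0.3:4421 to 10.0.0.3:4422, and the controller reset completes. A minimal sketch of how such a failover can be induced from the target side, assuming an SPDK target configured like the one in this run; the exact trigger used earlier in this log is not shown in this excerpt:

  # Hypothetical trigger: remove the listener the host is currently connected to,
  # so its I/O qpair drops and bdev_nvme fails over to the next registered path.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc_py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4421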
00:14:51.230 11760.60 IOPS, 45.94 MiB/s [2024-12-09T10:57:44.409Z] 11765.67 IOPS, 45.96 MiB/s [2024-12-09T10:57:44.409Z] 11775.57 IOPS, 46.00 MiB/s [2024-12-09T10:57:44.409Z] 11786.62 IOPS, 46.04 MiB/s [2024-12-09T10:57:44.409Z] 11811.78 IOPS, 46.14 MiB/s [2024-12-09T10:57:44.409Z] [2024-12-09 10:57:37.632627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.230 [2024-12-09 10:57:37.632679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.632712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.230 [2024-12-09 10:57:37.632722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.632733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.230 [2024-12-09 10:57:37.632743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.632753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.230 [2024-12-09 10:57:37.632772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.632783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.230 [2024-12-09 10:57:37.632792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.632802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.230 [2024-12-09 10:57:37.632811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.632821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.230 [2024-12-09 10:57:37.632830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.632841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.230 [2024-12-09 10:57:37.632850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.632861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.230 [2024-12-09 10:57:37.632870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.632880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 
lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.230 [2024-12-09 10:57:37.632889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.632899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.230 [2024-12-09 10:57:37.632908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.632938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.230 [2024-12-09 10:57:37.632948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.632958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.230 [2024-12-09 10:57:37.632967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.632977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.230 [2024-12-09 10:57:37.632987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.633008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.230 [2024-12-09 10:57:37.633017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.633026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.230 [2024-12-09 10:57:37.633034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.633044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.230 [2024-12-09 10:57:37.633052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.633064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.230 [2024-12-09 10:57:37.633072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.633082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.230 [2024-12-09 10:57:37.633091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.633100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:14:51.230 [2024-12-09 10:57:37.633109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.633118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.230 [2024-12-09 10:57:37.633126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.633136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.230 [2024-12-09 10:57:37.633145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.633154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.230 [2024-12-09 10:57:37.633162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.633172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.230 [2024-12-09 10:57:37.633185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.633195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.230 [2024-12-09 10:57:37.633203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.230 [2024-12-09 10:57:37.633213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633292] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.231 [2024-12-09 10:57:37.633422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.231 [2024-12-09 10:57:37.633439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.231 [2024-12-09 10:57:37.633457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.231 [2024-12-09 10:57:37.633475] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.231 [2024-12-09 10:57:37.633493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.231 [2024-12-09 10:57:37.633510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.231 [2024-12-09 10:57:37.633528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.231 [2024-12-09 10:57:37.633545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.231 [2024-12-09 10:57:37.633805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.231 [2024-12-09 10:57:37.633815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.633823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.633832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.633840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 
[2024-12-09 10:57:37.633850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.633858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.633868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.633881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.633890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.633899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.633908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.633917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.633927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.633935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.633945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.633953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.633963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.633971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.633980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.633988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.633998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.634006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.634023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.634071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.634089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.634106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.634124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.634145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.634163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.634181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.634199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.634217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.634235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:88 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.634253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.634271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.634289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.634307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.634324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.634341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.634359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.634381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.634398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.634416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93416 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.634434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.634452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.232 [2024-12-09 10:57:37.634472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.634490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.232 [2024-12-09 10:57:37.634508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.232 [2024-12-09 10:57:37.634517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.233 [2024-12-09 10:57:37.634526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.233 [2024-12-09 10:57:37.634544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.233 [2024-12-09 10:57:37.634562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.233 [2024-12-09 10:57:37.634580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.233 [2024-12-09 10:57:37.634602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:51.233 [2024-12-09 10:57:37.634620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.233 [2024-12-09 10:57:37.634637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.233 [2024-12-09 10:57:37.634655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.233 [2024-12-09 10:57:37.634672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.233 [2024-12-09 10:57:37.634690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.233 [2024-12-09 10:57:37.634707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.233 [2024-12-09 10:57:37.634725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.233 [2024-12-09 10:57:37.634743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x845950 is same with the state(6) to be set 00:14:51.233 [2024-12-09 10:57:37.634764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.233 [2024-12-09 10:57:37.634770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.233 [2024-12-09 10:57:37.634783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92880 len:8 PRP1 0x0 PRP2 0x0 00:14:51.233 [2024-12-09 10:57:37.634792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.233 [2024-12-09 10:57:37.634806] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.233 [2024-12-09 10:57:37.634813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93440 len:8 PRP1 0x0 PRP2 0x0 00:14:51.233 [2024-12-09 10:57:37.634821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.233 [2024-12-09 10:57:37.634856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.233 [2024-12-09 10:57:37.634863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93448 len:8 PRP1 0x0 PRP2 0x0 00:14:51.233 [2024-12-09 10:57:37.634872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.233 [2024-12-09 10:57:37.634887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.233 [2024-12-09 10:57:37.634893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93456 len:8 PRP1 0x0 PRP2 0x0 00:14:51.233 [2024-12-09 10:57:37.634902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.233 [2024-12-09 10:57:37.634916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.233 [2024-12-09 10:57:37.634923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93464 len:8 PRP1 0x0 PRP2 0x0 00:14:51.233 [2024-12-09 10:57:37.634932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.233 [2024-12-09 10:57:37.634948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.233 [2024-12-09 10:57:37.634954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93472 len:8 PRP1 0x0 PRP2 0x0 00:14:51.233 [2024-12-09 10:57:37.634963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.634973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.233 [2024-12-09 10:57:37.634978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.233 [2024-12-09 10:57:37.634985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93480 len:8 PRP1 0x0 PRP2 0x0 00:14:51.233 [2024-12-09 10:57:37.634993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.635001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.233 [2024-12-09 10:57:37.635008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:14:51.233 [2024-12-09 10:57:37.635014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93488 len:8 PRP1 0x0 PRP2 0x0 00:14:51.233 [2024-12-09 10:57:37.635023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.635033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.233 [2024-12-09 10:57:37.635039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.233 [2024-12-09 10:57:37.635045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93496 len:8 PRP1 0x0 PRP2 0x0 00:14:51.233 [2024-12-09 10:57:37.635054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.635062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.233 [2024-12-09 10:57:37.635068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.233 [2024-12-09 10:57:37.635075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93504 len:8 PRP1 0x0 PRP2 0x0 00:14:51.233 [2024-12-09 10:57:37.635083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.635096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.233 [2024-12-09 10:57:37.635102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.233 [2024-12-09 10:57:37.635109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93512 len:8 PRP1 0x0 PRP2 0x0 00:14:51.233 [2024-12-09 10:57:37.635118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.635126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.233 [2024-12-09 10:57:37.635132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.233 [2024-12-09 10:57:37.635139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93520 len:8 PRP1 0x0 PRP2 0x0 00:14:51.233 [2024-12-09 10:57:37.635147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.635156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.233 [2024-12-09 10:57:37.635162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.233 [2024-12-09 10:57:37.635168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92888 len:8 PRP1 0x0 PRP2 0x0 00:14:51.233 [2024-12-09 10:57:37.635177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.233 [2024-12-09 10:57:37.635186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.233 [2024-12-09 10:57:37.635192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.233 [2024-12-09 
10:57:37.635198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92896 len:8 PRP1 0x0 PRP2 0x0 00:14:51.233 [2024-12-09 10:57:37.635206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.234 [2024-12-09 10:57:37.635215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.234 [2024-12-09 10:57:37.635221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.234 [2024-12-09 10:57:37.635228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92904 len:8 PRP1 0x0 PRP2 0x0 00:14:51.234 [2024-12-09 10:57:37.635236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.234 [2024-12-09 10:57:37.635245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.234 [2024-12-09 10:57:37.635251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.234 [2024-12-09 10:57:37.635257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92912 len:8 PRP1 0x0 PRP2 0x0 00:14:51.234 [2024-12-09 10:57:37.635265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.234 [2024-12-09 10:57:37.635275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.234 [2024-12-09 10:57:37.635281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.234 [2024-12-09 10:57:37.635287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92920 len:8 PRP1 0x0 PRP2 0x0 00:14:51.234 [2024-12-09 10:57:37.635296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.234 [2024-12-09 10:57:37.635305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.234 [2024-12-09 10:57:37.635311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.234 [2024-12-09 10:57:37.635317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92928 len:8 PRP1 0x0 PRP2 0x0 00:14:51.234 [2024-12-09 10:57:37.635332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.234 [2024-12-09 10:57:37.635341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.234 [2024-12-09 10:57:37.635347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.234 [2024-12-09 10:57:37.635354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92936 len:8 PRP1 0x0 PRP2 0x0 00:14:51.234 [2024-12-09 10:57:37.635362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.234 [2024-12-09 10:57:37.635371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:14:51.234 [2024-12-09 10:57:37.635377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:14:51.234 [2024-12-09 10:57:37.635383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92944 len:8 PRP1 0x0 PRP2 0x0 00:14:51.234 [2024-12-09 10:57:37.635392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.234 [2024-12-09 10:57:37.635439] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:14:51.234 [2024-12-09 10:57:37.635483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.234 [2024-12-09 10:57:37.635494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.234 [2024-12-09 10:57:37.635504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.234 [2024-12-09 10:57:37.635513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.234 [2024-12-09 10:57:37.635522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.234 [2024-12-09 10:57:37.635531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.234 [2024-12-09 10:57:37.651803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:51.234 [2024-12-09 10:57:37.651854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:51.234 [2024-12-09 10:57:37.651870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:14:51.234 [2024-12-09 10:57:37.651929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d2c60 (9): Bad file descriptor 00:14:51.234 [2024-12-09 10:57:37.655735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:14:51.234 [2024-12-09 10:57:37.677287] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
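With this failover (4422 back to 4420) the reset again completes, giving the run its third "Resetting controller successful" message. The pass/fail criterion applied in the trace just below is simply a count of those messages; a hedged sketch of that check, with $log standing in for a hypothetical file holding the bdevperf output shown above:

  # Mirror of the check traced below: three induced failovers must yield
  # three successful controller resets in the captured output.
  count=$(grep -c 'Resetting controller successful' "$log")
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi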
00:14:51.234 11731.20 IOPS, 45.83 MiB/s [2024-12-09T10:57:44.413Z] 11673.09 IOPS, 45.60 MiB/s [2024-12-09T10:57:44.413Z] 11609.00 IOPS, 45.35 MiB/s [2024-12-09T10:57:44.413Z] 11557.31 IOPS, 45.15 MiB/s [2024-12-09T10:57:44.413Z] 11514.00 IOPS, 44.98 MiB/s [2024-12-09T10:57:44.413Z] 11487.20 IOPS, 44.87 MiB/s 00:14:51.234 Latency(us) 00:14:51.234 [2024-12-09T10:57:44.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.234 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:51.234 Verification LBA range: start 0x0 length 0x4000 00:14:51.234 NVMe0n1 : 15.01 11486.58 44.87 258.86 0.00 10874.69 447.16 26099.93 00:14:51.234 [2024-12-09T10:57:44.413Z] =================================================================================================================== 00:14:51.234 [2024-12-09T10:57:44.413Z] Total : 11486.58 44.87 258.86 0.00 10874.69 447.16 26099.93 00:14:51.234 Received shutdown signal, test time was about 15.000000 seconds 00:14:51.234 00:14:51.234 Latency(us) 00:14:51.234 [2024-12-09T10:57:44.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.234 [2024-12-09T10:57:44.413Z] =================================================================================================================== 00:14:51.234 [2024-12-09T10:57:44.413Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:51.234 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:14:51.234 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:14:51.234 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:14:51.234 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75484 00:14:51.234 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:14:51.234 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75484 /var/tmp/bdevperf.sock 00:14:51.234 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75484 ']' 00:14:51.234 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:51.234 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:51.234 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
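The pass/fail criterion for this half of the run is simply how many successful controller resets the captured output contains; the grep -c / count comparison traced above can be condensed into the short sketch below. This is a sketch only: the log does not show which file the grep reads, so try.txt (the capture file used later in this run) is an assumption.
  # Sketch of the check traced above; the input file name is an assumption.
  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful failover resets, got $count" >&2
      exit 1
  fi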
00:14:51.234 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.234 10:57:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:51.802 10:57:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.802 10:57:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:14:51.802 10:57:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:51.802 [2024-12-09 10:57:44.956911] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:51.802 10:57:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:14:52.061 [2024-12-09 10:57:45.144670] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:14:52.061 10:57:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:52.320 NVMe0n1 00:14:52.320 10:57:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:52.579 00:14:52.579 10:57:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:14:52.839 00:14:52.839 10:57:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:52.839 10:57:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:14:53.099 10:57:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:53.359 10:57:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:14:56.653 10:57:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:56.653 10:57:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:14:56.653 10:57:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75555 00:14:56.653 10:57:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:56.653 10:57:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75555 00:14:57.591 { 00:14:57.591 "results": [ 00:14:57.591 { 00:14:57.591 "job": "NVMe0n1", 00:14:57.591 "core_mask": "0x1", 00:14:57.591 "workload": "verify", 00:14:57.591 "status": "finished", 00:14:57.591 "verify_range": { 00:14:57.591 "start": 0, 00:14:57.591 "length": 16384 00:14:57.591 }, 00:14:57.591 "queue_depth": 128, 
00:14:57.591 "io_size": 4096, 00:14:57.591 "runtime": 1.007765, 00:14:57.591 "iops": 8867.146606599754, 00:14:57.591 "mibps": 34.63729143203029, 00:14:57.591 "io_failed": 0, 00:14:57.591 "io_timeout": 0, 00:14:57.591 "avg_latency_us": 14358.72157252153, 00:14:57.591 "min_latency_us": 937.2506550218341, 00:14:57.591 "max_latency_us": 13164.436681222707 00:14:57.591 } 00:14:57.591 ], 00:14:57.591 "core_count": 1 00:14:57.591 } 00:14:57.591 10:57:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:14:57.591 [2024-12-09 10:57:43.898166] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:14:57.591 [2024-12-09 10:57:43.898269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75484 ] 00:14:57.591 [2024-12-09 10:57:44.049033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.591 [2024-12-09 10:57:44.097985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.592 [2024-12-09 10:57:44.138636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:57.592 [2024-12-09 10:57:46.337510] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:14:57.592 [2024-12-09 10:57:46.337616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.592 [2024-12-09 10:57:46.337633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.592 [2024-12-09 10:57:46.337646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.592 [2024-12-09 10:57:46.337656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.592 [2024-12-09 10:57:46.337665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.592 [2024-12-09 10:57:46.337675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.592 [2024-12-09 10:57:46.337684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:57.592 [2024-12-09 10:57:46.337693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:57.592 [2024-12-09 10:57:46.337701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:14:57.592 [2024-12-09 10:57:46.337740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:14:57.592 [2024-12-09 10:57:46.337767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61ac60 (9): Bad file descriptor 00:14:57.592 [2024-12-09 10:57:46.344092] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:14:57.592 Running I/O for 1 seconds... 
00:14:57.592 8800.00 IOPS, 34.38 MiB/s 00:14:57.592 Latency(us) 00:14:57.592 [2024-12-09T10:57:50.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.592 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:57.592 Verification LBA range: start 0x0 length 0x4000 00:14:57.592 NVMe0n1 : 1.01 8867.15 34.64 0.00 0.00 14358.72 937.25 13164.44 00:14:57.592 [2024-12-09T10:57:50.771Z] =================================================================================================================== 00:14:57.592 [2024-12-09T10:57:50.771Z] Total : 8867.15 34.64 0.00 0.00 14358.72 937.25 13164.44 00:14:57.592 10:57:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:57.592 10:57:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:14:57.851 10:57:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:58.110 10:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:58.110 10:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:14:58.369 10:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:58.629 10:57:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:01.920 10:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:01.920 10:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:01.920 10:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75484 00:15:01.920 10:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75484 ']' 00:15:01.920 10:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75484 00:15:01.920 10:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:01.920 10:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.920 10:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75484 00:15:01.920 killing process with pid 75484 00:15:01.920 10:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:01.920 10:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:01.920 10:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75484' 00:15:01.920 10:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75484 00:15:01.920 10:57:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75484 00:15:02.178 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:02.178 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:02.437 rmmod nvme_tcp 00:15:02.437 rmmod nvme_fabrics 00:15:02.437 rmmod nvme_keyring 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75229 ']' 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75229 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75229 ']' 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75229 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75229 00:15:02.437 killing process with pid 75229 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75229' 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75229 00:15:02.437 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75229 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:02.696 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:02.956 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:02.956 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:02.956 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:02.956 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.956 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.956 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.956 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:15:02.956 ************************************ 00:15:02.956 END TEST nvmf_failover 00:15:02.956 ************************************ 00:15:02.956 00:15:02.956 real 0m31.776s 00:15:02.956 user 2m1.795s 00:15:02.956 sys 0m4.982s 00:15:02.956 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.956 10:57:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:02.956 10:57:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:02.956 10:57:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:02.956 10:57:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.956 10:57:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:02.956 ************************************ 00:15:02.956 START TEST nvmf_host_discovery 00:15:02.956 ************************************ 00:15:02.956 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:03.216 * Looking for test storage... 
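The teardown traced above mirrors the setup: the target process is killed, the nvme-tcp modules are unloaded, the SPDK-tagged iptables rules are stripped, and the veth/bridge/namespace topology is deleted. A rough sketch follows; killprocess is the harness helper seen in the trace, and the final ip netns delete is an assumption about what remove_spdk_ns does rather than a command shown in the log.
  # Rough sketch of the cleanup traced above (not the literal nvmftestfini/nvmf_veth_fini code)
  killprocess "$nvmfpid"                                   # harness helper: kill the nvmf_tgt pid and wait
  modprobe -v -r nvme-tcp                                  # also unloads nvme_fabrics / nvme_keyring dependents
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the SPDK_NVMF-tagged rules
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster
      ip link set "$dev" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                         # assumption: remove_spdk_ns removes the namespace itself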
00:15:03.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:03.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.216 --rc genhtml_branch_coverage=1 00:15:03.216 --rc genhtml_function_coverage=1 00:15:03.216 --rc genhtml_legend=1 00:15:03.216 --rc geninfo_all_blocks=1 00:15:03.216 --rc geninfo_unexecuted_blocks=1 00:15:03.216 00:15:03.216 ' 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:03.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.216 --rc genhtml_branch_coverage=1 00:15:03.216 --rc genhtml_function_coverage=1 00:15:03.216 --rc genhtml_legend=1 00:15:03.216 --rc geninfo_all_blocks=1 00:15:03.216 --rc geninfo_unexecuted_blocks=1 00:15:03.216 00:15:03.216 ' 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:03.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.216 --rc genhtml_branch_coverage=1 00:15:03.216 --rc genhtml_function_coverage=1 00:15:03.216 --rc genhtml_legend=1 00:15:03.216 --rc geninfo_all_blocks=1 00:15:03.216 --rc geninfo_unexecuted_blocks=1 00:15:03.216 00:15:03.216 ' 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:03.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.216 --rc genhtml_branch_coverage=1 00:15:03.216 --rc genhtml_function_coverage=1 00:15:03.216 --rc genhtml_legend=1 00:15:03.216 --rc geninfo_all_blocks=1 00:15:03.216 --rc geninfo_unexecuted_blocks=1 00:15:03.216 00:15:03.216 ' 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:03.216 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:03.217 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:03.217 Cannot find device "nvmf_init_br" 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:03.217 Cannot find device "nvmf_init_br2" 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:03.217 Cannot find device "nvmf_tgt_br" 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:03.217 Cannot find device "nvmf_tgt_br2" 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:03.217 Cannot find device "nvmf_init_br" 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:03.217 Cannot find device "nvmf_init_br2" 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:03.217 Cannot find device "nvmf_tgt_br" 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:15:03.217 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:03.478 Cannot find device "nvmf_tgt_br2" 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:03.478 Cannot find device "nvmf_br" 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:03.478 Cannot find device "nvmf_init_if" 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:03.478 Cannot find device "nvmf_init_if2" 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:03.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:03.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:03.478 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:03.478 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:15:03.478 00:15:03.478 --- 10.0.0.3 ping statistics --- 00:15:03.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.478 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:03.478 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:03.478 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms 00:15:03.478 00:15:03.478 --- 10.0.0.4 ping statistics --- 00:15:03.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.478 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:03.478 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:03.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:03.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:15:03.478 00:15:03.478 --- 10.0.0.1 ping statistics --- 00:15:03.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.478 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:03.479 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:03.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:03.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:15:03.479 00:15:03.479 --- 10.0.0.2 ping statistics --- 00:15:03.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:03.479 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:15:03.479 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:03.479 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:15:03.479 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:03.479 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:03.479 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:03.479 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:03.479 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:03.479 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:03.479 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:03.479 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:03.479 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:03.738 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:03.738 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.738 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75883 00:15:03.738 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:03.738 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75883 00:15:03.738 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75883 ']' 00:15:03.738 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.738 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.738 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.738 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.738 10:57:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.738 [2024-12-09 10:57:56.716140] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:15:03.738 [2024-12-09 10:57:56.716199] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.738 [2024-12-09 10:57:56.869398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.738 [2024-12-09 10:57:56.912913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.738 [2024-12-09 10:57:56.912967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.738 [2024-12-09 10:57:56.912975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.738 [2024-12-09 10:57:56.912980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.738 [2024-12-09 10:57:56.912985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:03.738 [2024-12-09 10:57:56.913324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.996 [2024-12-09 10:57:56.954627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.563 [2024-12-09 10:57:57.603560] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.563 [2024-12-09 10:57:57.615645] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.563 10:57:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.563 null0 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.563 null1 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75911 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75911 /tmp/host.sock 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 75911 ']' 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:04.563 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:04.563 10:57:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:04.563 [2024-12-09 10:57:57.695865] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:15:04.563 [2024-12-09 10:57:57.695939] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75911 ] 00:15:04.821 [2024-12-09 10:57:57.845303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.821 [2024-12-09 10:57:57.893414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.821 [2024-12-09 10:57:57.933582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:05.388 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:05.388 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:05.388 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:05.389 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:05.389 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.389 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.389 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.389 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:05.389 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.389 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.389 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.389 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:05.389 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:05.389 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:05.389 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:05.389 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.389 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.389 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:05.389 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@91 -- # get_subsystem_names 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:05.647 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.906 [2024-12-09 10:57:58.893438] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:15:05.906 10:57:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:05.906 10:57:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.906 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:05.906 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:05.906 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:05.906 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:05.906 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:05.906 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:05.906 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:05.906 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:05.906 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:05.906 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:05.906 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:05.906 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.906 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.906 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:05.907 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:06.164 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.164 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:15:06.164 10:57:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:15:06.423 [2024-12-09 10:57:59.553920] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:06.423 [2024-12-09 10:57:59.553957] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:06.423 [2024-12-09 10:57:59.553997] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:06.423 
[2024-12-09 10:57:59.559944] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:15:06.681 [2024-12-09 10:57:59.614188] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:15:06.681 [2024-12-09 10:57:59.615153] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a66da0:1 started. 00:15:06.681 [2024-12-09 10:57:59.616700] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:06.681 [2024-12-09 10:57:59.616725] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:06.681 [2024-12-09 10:57:59.622484] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a66da0 was disconnected and freed. delete nvme_qpair. 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:07.246 10:58:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:07.246 10:58:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:07.246 [2024-12-09 10:58:00.354181] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a75190:1 started. 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:07.246 [2024-12-09 10:58:00.361442] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a75190 was disconnected and freed. delete nvme_qpair. 
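By this point the trace has exercised the same small set of helpers repeatedly: controller and bdev listings fetched over /tmp/host.sock, a notification counter driven by notify_get_notifications, and a bounded polling loop around an arbitrary condition. A sketch of those helpers as they appear in the xtrace (reconstructed from the trace output, not copied verbatim from host/discovery.sh or common/autotest_common.sh):

# Reconstructed sketch of the helpers the trace keeps invoking.
get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
# Count new notifications since $notify_id and advance the cursor,
# matching the notification_count/notify_id values printed in the trace:
get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}
# Poll a condition string (e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]')
# up to ~10 times with a 1-second sleep between attempts:
waitforcondition() {
    local cond=$1
    local max=10
    while ((max--)); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}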
00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:07.246 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:07.247 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:07.247 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:07.247 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.247 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.247 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:07.247 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.512 [2024-12-09 10:58:00.455736] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:07.512 [2024-12-09 10:58:00.456286] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:07.512 [2024-12-09 10:58:00.456319] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:07.512 [2024-12-09 10:58:00.462262] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:07.512 [2024-12-09 10:58:00.525454] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:15:07.512 [2024-12-09 10:58:00.525505] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:07.512 [2024-12-09 10:58:00.525513] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:07.512 [2024-12-09 10:58:00.525517] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.512 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.786 [2024-12-09 10:58:00.688586] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:07.786 [2024-12-09 10:58:00.688619] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:07.786 [2024-12-09 10:58:00.692207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.786 [2024-12-09 10:58:00.692634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.786 [2024-12-09 10:58:00.692655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.786 [2024-12-09 10:58:00.692662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.786 [2024-12-09 10:58:00.692671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.786 [2024-12-09 10:58:00.692679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:07.786 [2024-12-09 10:58:00.692685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:07.786 [2024-12-09 10:58:00.692691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:07.786 [2024-12-09 10:58:00.692697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a42fb0 is same with the state(6) to be set 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:07.786 [2024-12-09 10:58:00.694754] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:15:07.786 [2024-12-09 10:58:00.694777] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:07.786 [2024-12-09 10:58:00.694821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a42fb0 (9): Bad file descriptor 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 
-- # (( max-- )) 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:07.786 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:15:08.044 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:08.044 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.044 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.044 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:08.044 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:08.044 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:08.044 10:58:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.044 10:58:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:08.978 [2024-12-09 10:58:02.080060] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:08.978 [2024-12-09 10:58:02.080093] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:08.978 [2024-12-09 10:58:02.080107] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:08.978 [2024-12-09 10:58:02.086069] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:15:08.978 [2024-12-09 10:58:02.144211] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:15:08.978 [2024-12-09 10:58:02.144911] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1a69c00:1 started. 00:15:08.978 [2024-12-09 10:58:02.146666] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:08.978 [2024-12-09 10:58:02.146706] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:08.978 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.978 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:08.978 [2024-12-09 10:58:02.148976] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1a69c00 was disconnected and freed. delete nvme_qpair. 
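The final leg of the trace stops the discovery service, verifies that the controllers, bdevs and listeners are gone, and then restarts discovery with -w (wait_for_attach) so the RPC only returns once the discovery controller is attached. Immediately afterwards it checks the duplicate-start error path: issuing bdev_nvme_start_discovery again under the same controller name is expected to fail, and the request/response dump that follows shows JSON-RPC error -17 ("File exists"). A sketch of that check, with NOT assumed to be the autotest helper that inverts an exit status (flags copied from the trace):

# Restart discovery and block until the controller is attached:
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 \
    -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

# A second start with the same controller name must be rejected (-17 "File exists"):
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 \
    -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w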
00:15:08.978 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:08.978 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:08.978 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:08.978 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:08.978 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:08.978 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:08.978 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:08.978 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.978 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:09.236 request: 00:15:09.236 { 00:15:09.236 "name": "nvme", 00:15:09.236 "trtype": "tcp", 00:15:09.236 "traddr": "10.0.0.3", 00:15:09.236 "adrfam": "ipv4", 00:15:09.236 "trsvcid": "8009", 00:15:09.236 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:09.236 "wait_for_attach": true, 00:15:09.236 "method": "bdev_nvme_start_discovery", 00:15:09.236 "req_id": 1 00:15:09.236 } 00:15:09.236 Got JSON-RPC error response 00:15:09.236 response: 00:15:09.236 { 00:15:09.236 "code": -17, 00:15:09.236 "message": "File exists" 00:15:09.236 } 00:15:09.236 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:09.236 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:09.236 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:09.236 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:09.236 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:09.236 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:09.236 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:09.236 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:09.236 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:09.236 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.236 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:09.236 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:09.236 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.236 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:09.236 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:09.237 request: 00:15:09.237 { 00:15:09.237 "name": "nvme_second", 00:15:09.237 "trtype": "tcp", 00:15:09.237 "traddr": "10.0.0.3", 00:15:09.237 "adrfam": "ipv4", 00:15:09.237 "trsvcid": "8009", 00:15:09.237 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:09.237 "wait_for_attach": true, 00:15:09.237 "method": "bdev_nvme_start_discovery", 00:15:09.237 "req_id": 1 00:15:09.237 } 00:15:09.237 Got JSON-RPC error response 00:15:09.237 response: 00:15:09.237 { 00:15:09.237 "code": -17, 00:15:09.237 "message": "File exists" 00:15:09.237 } 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # 
get_discovery_ctrlrs 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.237 10:58:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:10.611 [2024-12-09 10:58:03.388855] uring.c: 664:uring_sock_create: *ERROR*: connect() 
failed, errno = 111 00:15:10.611 [2024-12-09 10:58:03.389235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a66bb0 with addr=10.0.0.3, port=8010 00:15:10.611 [2024-12-09 10:58:03.389320] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:10.611 [2024-12-09 10:58:03.389376] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:10.611 [2024-12-09 10:58:03.389429] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:11.543 [2024-12-09 10:58:04.386925] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:11.543 [2024-12-09 10:58:04.387141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a765d0 with addr=10.0.0.3, port=8010 00:15:11.543 [2024-12-09 10:58:04.387204] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:11.543 [2024-12-09 10:58:04.387244] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:11.543 [2024-12-09 10:58:04.387279] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:12.475 [2024-12-09 10:58:05.384879] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:15:12.475 request: 00:15:12.475 { 00:15:12.475 "name": "nvme_second", 00:15:12.475 "trtype": "tcp", 00:15:12.475 "traddr": "10.0.0.3", 00:15:12.475 "adrfam": "ipv4", 00:15:12.475 "trsvcid": "8010", 00:15:12.475 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:12.475 "wait_for_attach": false, 00:15:12.475 "attach_timeout_ms": 3000, 00:15:12.475 "method": "bdev_nvme_start_discovery", 00:15:12.475 "req_id": 1 00:15:12.475 } 00:15:12.475 Got JSON-RPC error response 00:15:12.475 response: 00:15:12.475 { 00:15:12.475 "code": -110, 00:15:12.475 "message": "Connection timed out" 00:15:12.475 } 00:15:12.475 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:12.475 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:15:12.475 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:12.475 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:12.475 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:12.475 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:12.475 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:12.475 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:12.475 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.475 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:12.475 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.475 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:12.475 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:12.476 10:58:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75911 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:12.476 rmmod nvme_tcp 00:15:12.476 rmmod nvme_fabrics 00:15:12.476 rmmod nvme_keyring 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75883 ']' 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75883 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 75883 ']' 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 75883 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75883 00:15:12.476 killing process with pid 75883 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75883' 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 75883 00:15:12.476 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 75883 00:15:12.735 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:12.735 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:12.735 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:12.735 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:15:12.735 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:15:12.735 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:12.735 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:15:12.735 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:12.735 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:12.735 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:12.735 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:12.735 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:12.735 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:12.735 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:12.735 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:12.997 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:12.997 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:12.997 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:12.997 10:58:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:12.997 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:12.997 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.997 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.997 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:12.997 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.997 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.997 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.997 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:15:12.997 00:15:12.997 real 0m10.093s 00:15:12.997 user 0m18.487s 00:15:12.997 sys 0m2.196s 00:15:12.997 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.997 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.997 ************************************ 00:15:12.997 END TEST nvmf_host_discovery 00:15:12.997 ************************************ 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:13.256 ************************************ 00:15:13.256 START TEST nvmf_host_multipath_status 00:15:13.256 ************************************ 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:13.256 * Looking for test storage... 00:15:13.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:15:13.256 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:13.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.257 --rc genhtml_branch_coverage=1 00:15:13.257 --rc genhtml_function_coverage=1 00:15:13.257 --rc genhtml_legend=1 00:15:13.257 --rc geninfo_all_blocks=1 00:15:13.257 --rc geninfo_unexecuted_blocks=1 00:15:13.257 00:15:13.257 ' 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:13.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.257 --rc genhtml_branch_coverage=1 00:15:13.257 --rc genhtml_function_coverage=1 00:15:13.257 --rc genhtml_legend=1 00:15:13.257 --rc geninfo_all_blocks=1 00:15:13.257 --rc geninfo_unexecuted_blocks=1 00:15:13.257 00:15:13.257 ' 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:13.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.257 --rc genhtml_branch_coverage=1 00:15:13.257 --rc genhtml_function_coverage=1 00:15:13.257 --rc genhtml_legend=1 00:15:13.257 --rc geninfo_all_blocks=1 00:15:13.257 --rc geninfo_unexecuted_blocks=1 00:15:13.257 00:15:13.257 ' 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:13.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.257 --rc genhtml_branch_coverage=1 00:15:13.257 --rc genhtml_function_coverage=1 00:15:13.257 --rc genhtml_legend=1 00:15:13.257 --rc geninfo_all_blocks=1 00:15:13.257 --rc geninfo_unexecuted_blocks=1 00:15:13.257 00:15:13.257 ' 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:13.257 10:58:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:15:13.257 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:13.516 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:13.516 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:13.517 Cannot find device "nvmf_init_br" 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:13.517 Cannot find device "nvmf_init_br2" 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:13.517 Cannot find device "nvmf_tgt_br" 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:13.517 Cannot find device "nvmf_tgt_br2" 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:13.517 Cannot find device "nvmf_init_br" 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:13.517 Cannot find device "nvmf_init_br2" 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:13.517 Cannot find device "nvmf_tgt_br" 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:13.517 Cannot find device "nvmf_tgt_br2" 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:13.517 Cannot find device "nvmf_br" 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:15:13.517 Cannot find device "nvmf_init_if" 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:13.517 Cannot find device "nvmf_init_if2" 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:13.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:13.517 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:13.517 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:13.776 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:13.776 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:13.776 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:13.776 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:13.776 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:13.776 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:13.777 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:13.777 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:15:13.777 00:15:13.777 --- 10.0.0.3 ping statistics --- 00:15:13.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.777 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:13.777 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:13.777 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:15:13.777 00:15:13.777 --- 10.0.0.4 ping statistics --- 00:15:13.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.777 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:13.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:13.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:15:13.777 00:15:13.777 --- 10.0.0.1 ping statistics --- 00:15:13.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.777 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:13.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:13.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:15:13.777 00:15:13.777 --- 10.0.0.2 ping statistics --- 00:15:13.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.777 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76421 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76421 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76421 ']' 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.777 10:58:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:13.777 [2024-12-09 10:58:06.893177] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:15:13.777 [2024-12-09 10:58:06.893238] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.035 [2024-12-09 10:58:07.047576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:14.035 [2024-12-09 10:58:07.093410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.035 [2024-12-09 10:58:07.093484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.035 [2024-12-09 10:58:07.093491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.036 [2024-12-09 10:58:07.093495] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.036 [2024-12-09 10:58:07.093499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:14.036 [2024-12-09 10:58:07.094329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.036 [2024-12-09 10:58:07.094329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.036 [2024-12-09 10:58:07.135411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:14.601 10:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.601 10:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:15:14.601 10:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:14.601 10:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:14.601 10:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:14.859 10:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.859 10:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76421 00:15:14.859 10:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:14.859 [2024-12-09 10:58:07.976237] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.859 10:58:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:15.117 Malloc0 00:15:15.117 10:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:15.374 10:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:15.632 10:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:15.632 [2024-12-09 10:58:08.769815] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:15.632 10:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:15:15.890 [2024-12-09 10:58:08.945558] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:15.890 10:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:15.890 10:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76467 00:15:15.890 10:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:15.890 10:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76467 /var/tmp/bdevperf.sock 00:15:15.890 10:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76467 ']' 00:15:15.890 10:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:15.890 10:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:15.890 10:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:15.890 10:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.890 10:58:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:16.147 10:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.147 10:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:15:16.147 10:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:16.405 10:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:16.662 Nvme0n1 00:15:16.662 10:58:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:16.920 Nvme0n1 00:15:16.920 10:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:16.920 10:58:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:19.449 10:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:19.449 10:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:19.449 10:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:19.449 10:58:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:20.393 10:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:20.393 10:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:20.393 10:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:20.393 10:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:20.651 10:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:20.651 10:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:20.651 10:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:20.651 10:58:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:20.909 10:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:20.909 10:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:20.909 10:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:20.909 10:58:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:20.909 10:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:20.909 10:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:20.909 10:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:20.909 10:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:21.167 10:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:21.167 10:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:21.167 10:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.167 10:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:21.425 10:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:21.425 10:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:21.425 10:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:21.425 10:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:21.683 10:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:21.683 10:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:21.683 10:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:21.683 10:58:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:21.942 10:58:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:22.875 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:22.875 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:22.875 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:22.875 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:23.132 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:23.132 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:23.133 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:23.133 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:23.391 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:23.391 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:23.391 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:23.391 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:23.650 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:23.650 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:23.650 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:23.650 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:23.908 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:23.908 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:23.908 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:23.908 10:58:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:23.908 10:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:23.908 10:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:23.908 10:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:23.908 10:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:24.167 10:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:24.167 10:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:24.167 10:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:24.425 10:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:15:24.683 10:58:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:25.618 10:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:25.618 10:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:25.618 10:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.618 10:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:25.876 10:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:25.876 10:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:25.876 10:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.876 10:58:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:25.876 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:25.876 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:25.876 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:25.876 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:26.135 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:26.135 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:15:26.135 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:26.135 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:26.394 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:26.394 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:26.394 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:26.394 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:26.653 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:26.653 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:26.653 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:26.653 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:26.912 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:26.912 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:26.912 10:58:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:26.912 10:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:15:27.170 10:58:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:15:28.545 10:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:15:28.545 10:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:28.545 10:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.545 10:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:28.545 10:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:28.545 10:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:28.545 10:58:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:28.545 10:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.545 10:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:28.545 10:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:28.545 10:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.545 10:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:28.803 10:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:28.803 10:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:28.803 10:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:28.803 10:58:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:29.062 10:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:29.062 10:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:29.062 10:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:29.062 10:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:29.320 10:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:29.320 10:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:29.320 10:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:29.320 10:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:29.320 10:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:29.320 10:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:15:29.320 10:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:29.579 10:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:15:29.837 10:58:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:15:30.771 10:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:15:30.772 10:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:30.772 10:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:30.772 10:58:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:31.030 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:31.030 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:31.030 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:31.030 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:31.288 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:31.288 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:31.288 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:31.288 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:31.547 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:31.547 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:31.547 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:31.547 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:31.547 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:31.547 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:31.547 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:31.547 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:15:31.806 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:31.806 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:31.806 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:31.806 10:58:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:32.066 10:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:32.066 10:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:15:32.066 10:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:32.354 10:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:32.354 10:58:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:15:33.727 10:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:15:33.727 10:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:33.728 10:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:33.728 10:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:33.728 10:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:33.728 10:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:33.728 10:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:33.728 10:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:33.728 10:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:33.728 10:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:33.728 10:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:33.728 10:58:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:33.986 10:58:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:33.986 10:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:33.986 10:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:33.986 10:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:34.244 10:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:34.244 10:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:34.244 10:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:34.244 10:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:34.503 10:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:34.503 10:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:34.503 10:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:34.503 10:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:34.761 10:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:34.761 10:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:15:34.761 10:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:15:34.761 10:58:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:35.020 10:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:35.278 10:58:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:15:36.214 10:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:15:36.214 10:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:36.214 10:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:36.214 10:58:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:36.472 10:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:36.472 10:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:36.472 10:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:36.472 10:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:36.730 10:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:36.730 10:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:36.730 10:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:36.730 10:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:36.730 10:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:36.730 10:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:36.730 10:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:36.730 10:58:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:36.989 10:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:36.989 10:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:36.989 10:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:36.989 10:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:37.247 10:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:37.247 10:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:37.247 10:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:37.247 10:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:37.505 10:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:37.505 10:58:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:15:37.505 10:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:37.763 10:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:37.763 10:58:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:15:39.139 10:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:15:39.139 10:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:39.139 10:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.139 10:58:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:39.139 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:39.139 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:39.139 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:39.139 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.139 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.139 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:39.139 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.139 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:39.397 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.397 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:39.397 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:39.397 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.655 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.655 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:39.655 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:39.655 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:39.914 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:39.914 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:39.914 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:39.914 10:58:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:40.172 10:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:40.172 10:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:15:40.172 10:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:40.172 10:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:15:40.431 10:58:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:15:41.367 10:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:15:41.367 10:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:41.367 10:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.367 10:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:41.626 10:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:41.626 10:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:41.626 10:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:41.626 10:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:41.884 10:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:41.884 10:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:15:41.884 10:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:41.884 10:58:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.142 10:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.142 10:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:42.142 10:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.142 10:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:42.142 10:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.142 10:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:42.142 10:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.142 10:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:42.399 10:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.399 10:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:42.399 10:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:42.399 10:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:42.657 10:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:42.657 10:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:15:42.657 10:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:42.915 10:58:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:15:43.174 10:58:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:15:44.108 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:15:44.109 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:44.109 10:58:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.109 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:44.367 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:44.367 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:44.367 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.367 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:44.626 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:44.626 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:44.626 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.626 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:44.626 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:44.626 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:44.626 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.626 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:44.884 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:44.884 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:44.884 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.884 10:58:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:45.142 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.142 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:45.142 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.142 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:15:45.400 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:45.400 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76467 00:15:45.400 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76467 ']' 00:15:45.400 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76467 00:15:45.400 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:15:45.400 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.400 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76467 00:15:45.400 killing process with pid 76467 00:15:45.400 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:45.400 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:45.400 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76467' 00:15:45.400 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76467 00:15:45.400 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76467 00:15:45.400 { 00:15:45.400 "results": [ 00:15:45.400 { 00:15:45.400 "job": "Nvme0n1", 00:15:45.400 "core_mask": "0x4", 00:15:45.400 "workload": "verify", 00:15:45.400 "status": "terminated", 00:15:45.400 "verify_range": { 00:15:45.400 "start": 0, 00:15:45.400 "length": 16384 00:15:45.400 }, 00:15:45.400 "queue_depth": 128, 00:15:45.400 "io_size": 4096, 00:15:45.400 "runtime": 28.341176, 00:15:45.400 "iops": 11304.294500693972, 00:15:45.400 "mibps": 44.15740039333583, 00:15:45.400 "io_failed": 0, 00:15:45.400 "io_timeout": 0, 00:15:45.400 "avg_latency_us": 11302.094697675568, 00:15:45.400 "min_latency_us": 105.53013100436681, 00:15:45.400 "max_latency_us": 3018433.6209606985 00:15:45.400 } 00:15:45.400 ], 00:15:45.400 "core_count": 1 00:15:45.400 } 00:15:45.662 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76467 00:15:45.662 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:45.662 [2024-12-09 10:58:08.992314] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:15:45.662 [2024-12-09 10:58:08.992385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76467 ] 00:15:45.662 [2024-12-09 10:58:09.123675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.662 [2024-12-09 10:58:09.170563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.662 [2024-12-09 10:58:09.212814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:45.662 Running I/O for 90 seconds... 
00:15:45.662 9386.00 IOPS, 36.66 MiB/s [2024-12-09T10:58:38.841Z] 9557.50 IOPS, 37.33 MiB/s [2024-12-09T10:58:38.841Z] 9614.33 IOPS, 37.56 MiB/s [2024-12-09T10:58:38.841Z] 9640.00 IOPS, 37.66 MiB/s [2024-12-09T10:58:38.841Z] 9749.60 IOPS, 38.08 MiB/s [2024-12-09T10:58:38.841Z] 10145.67 IOPS, 39.63 MiB/s [2024-12-09T10:58:38.841Z] 10404.00 IOPS, 40.64 MiB/s [2024-12-09T10:58:38.841Z] 10637.12 IOPS, 41.55 MiB/s [2024-12-09T10:58:38.841Z] 10845.00 IOPS, 42.36 MiB/s [2024-12-09T10:58:38.841Z] 10978.10 IOPS, 42.88 MiB/s [2024-12-09T10:58:38.841Z] 11110.64 IOPS, 43.40 MiB/s [2024-12-09T10:58:38.841Z] 11219.42 IOPS, 43.83 MiB/s [2024-12-09T10:58:38.841Z] [2024-12-09 10:58:22.682801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.662 [2024-12-09 10:58:22.682860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:15:45.662 [2024-12-09 10:58:22.682905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.662 [2024-12-09 10:58:22.682916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:15:45.662 [2024-12-09 10:58:22.682932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.662 [2024-12-09 10:58:22.682942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.682956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.682965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.682980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.682989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 
10:58:22.683073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.663 [2024-12-09 10:58:22.683156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.663 [2024-12-09 10:58:22.683180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.663 [2024-12-09 10:58:22.683204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.663 [2024-12-09 10:58:22.683229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.663 [2024-12-09 10:58:22.683252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.663 [2024-12-09 10:58:22.683276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.663 [2024-12-09 10:58:22.683300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.663 [2024-12-09 10:58:22.683323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.663 [2024-12-09 10:58:22.683719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:15:45.663 [2024-12-09 10:58:22.683735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.664 [2024-12-09 10:58:22.683753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.683770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.664 [2024-12-09 10:58:22.683779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.683793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.664 [2024-12-09 10:58:22.683803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.683818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.664 [2024-12-09 10:58:22.683827] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.683842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.664 [2024-12-09 10:58:22.683852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.683867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.683887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.683909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.683922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.683941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.683955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.683976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.683990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.664 [2024-12-09 10:58:22.684320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.664 [2024-12-09 10:58:22.684346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:49 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.664 [2024-12-09 10:58:22.684370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.664 [2024-12-09 10:58:22.684399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.664 [2024-12-09 10:58:22.684423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.664 [2024-12-09 10:58:22.684447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.664 [2024-12-09 10:58:22.684473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.664 [2024-12-09 10:58:22.684497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.664 [2024-12-09 10:58:22.684520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684606] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.664 [2024-12-09 10:58:22.684663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:45.664 [2024-12-09 10:58:22.684678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.684692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.684707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.684716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.684730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.684739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.684764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.684773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.684789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.684798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.684812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.684838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.684854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.684864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 
sqhd:0015 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.684880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.684890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.684904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.684913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.684928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.684938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.684955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.665 [2024-12-09 10:58:22.684965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.684980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.665 [2024-12-09 10:58:22.684989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.665 [2024-12-09 10:58:22.685018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.665 [2024-12-09 10:58:22.685043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.665 [2024-12-09 10:58:22.685068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.665 [2024-12-09 10:58:22.685093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.665 [2024-12-09 10:58:22.685117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.665 [2024-12-09 10:58:22.685140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.685164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.685187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.685211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.685234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.685259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.685285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.685313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.685337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.685361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.685385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.685408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.685432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.665 [2024-12-09 10:58:22.685456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:45.665 [2024-12-09 10:58:22.685470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.666 [2024-12-09 10:58:22.685480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.685494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.666 [2024-12-09 10:58:22.685503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.685518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.666 [2024-12-09 10:58:22.685527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.685542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.666 [2024-12-09 10:58:22.685550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.685565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.666 [2024-12-09 10:58:22.685574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.685589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:45.666 [2024-12-09 10:58:22.685600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.685619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.666 [2024-12-09 10:58:22.685628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.685643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.666 [2024-12-09 10:58:22.685652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.685668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.666 [2024-12-09 10:58:22.685679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.685694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.666 [2024-12-09 10:58:22.685704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.666 [2024-12-09 10:58:22.686288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:22.686321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:22.686352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:22.686382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:22.686411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:22.686452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:22.686480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:22.686508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:22.686556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:22.686585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:22.686613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:22.686641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:22.686669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:22.686697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:22.686729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:22.686784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:22.686807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:22.686817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:15:45.666 10913.00 IOPS, 42.63 MiB/s [2024-12-09T10:58:38.845Z] 10133.50 IOPS, 39.58 MiB/s [2024-12-09T10:58:38.845Z] 9457.93 IOPS, 36.95 MiB/s [2024-12-09T10:58:38.845Z] 9182.62 IOPS, 35.87 MiB/s [2024-12-09T10:58:38.845Z] 9371.65 IOPS, 36.61 MiB/s [2024-12-09T10:58:38.845Z] 9529.61 IOPS, 37.23 MiB/s [2024-12-09T10:58:38.845Z] 9910.32 IOPS, 38.71 MiB/s [2024-12-09T10:58:38.845Z] 10222.80 IOPS, 39.93 MiB/s [2024-12-09T10:58:38.845Z] 10416.43 IOPS, 40.69 MiB/s [2024-12-09T10:58:38.845Z] 10502.59 IOPS, 41.03 MiB/s [2024-12-09T10:58:38.845Z] 10572.04 IOPS, 41.30 MiB/s [2024-12-09T10:58:38.845Z] 10745.17 IOPS, 41.97 MiB/s [2024-12-09T10:58:38.845Z] 10973.16 IOPS, 42.86 MiB/s [2024-12-09T10:58:38.845Z] 11199.58 IOPS, 43.75 MiB/s [2024-12-09T10:58:38.845Z] [2024-12-09 10:58:36.123443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:36.123499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:36.123540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:36.123550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:36.123589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.666 [2024-12-09 10:58:36.123598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:36.123612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.666 [2024-12-09 10:58:36.123621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:36.123634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:36.123643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:15:45.666 [2024-12-09 10:58:36.123656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.666 [2024-12-09 10:58:36.123665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 
cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.123679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.667 [2024-12-09 10:58:36.123687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.123701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.667 [2024-12-09 10:58:36.123709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.123723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.667 [2024-12-09 10:58:36.123731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.123757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:108152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.667 [2024-12-09 10:58:36.123766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.123780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:108184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.667 [2024-12-09 10:58:36.123789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.123803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.667 [2024-12-09 10:58:36.123811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.123825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.667 [2024-12-09 10:58:36.123834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.123847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.667 [2024-12-09 10:58:36.123856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.123876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.667 [2024-12-09 10:58:36.123886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.123920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.667 [2024-12-09 10:58:36.123933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.123951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.667 [2024-12-09 10:58:36.123963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.123983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.667 [2024-12-09 10:58:36.123995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.124031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.667 [2024-12-09 10:58:36.124041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.124055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.667 [2024-12-09 10:58:36.124065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.124079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.667 [2024-12-09 10:58:36.124087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.124102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.667 [2024-12-09 10:58:36.124111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.124125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.667 [2024-12-09 10:58:36.124134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.124147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.667 [2024-12-09 10:58:36.124156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.124169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.667 [2024-12-09 10:58:36.124178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.124191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108728 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:45.667 [2024-12-09 10:58:36.124200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.124214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.667 [2024-12-09 10:58:36.124231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.124245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.667 [2024-12-09 10:58:36.124253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.124267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.667 [2024-12-09 10:58:36.124275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.124289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.667 [2024-12-09 10:58:36.124301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.124314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.667 [2024-12-09 10:58:36.124323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:45.667 [2024-12-09 10:58:36.124337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.667 [2024-12-09 10:58:36.124346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:45.668 [2024-12-09 10:58:36.124360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.668 [2024-12-09 10:58:36.124369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:45.668 [2024-12-09 10:58:36.124384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.668 [2024-12-09 10:58:36.124393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:45.668 [2024-12-09 10:58:36.124409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.668 [2024-12-09 10:58:36.124418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:45.668 [2024-12-09 10:58:36.124432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:47 nsid:1 lba:108816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.668 [2024-12-09 10:58:36.124441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:15:45.668 [2024-12-09 10:58:36.124455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.668 [2024-12-09 10:58:36.124464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:15:45.668 [2024-12-09 10:58:36.124478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.668 [2024-12-09 10:58:36.124487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:15:45.668 [2024-12-09 10:58:36.124501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.668 [2024-12-09 10:58:36.124518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:45.668 [2024-12-09 10:58:36.124533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.668 [2024-12-09 10:58:36.124542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:45.668 [2024-12-09 10:58:36.124556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.668 [2024-12-09 10:58:36.124566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:45.668 [2024-12-09 10:58:36.125344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.668 [2024-12-09 10:58:36.125367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:45.668 [2024-12-09 10:58:36.125385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.668 [2024-12-09 10:58:36.125394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:15:45.668 [2024-12-09 10:58:36.125409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:45.668 [2024-12-09 10:58:36.125419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:15:45.668 [2024-12-09 10:58:36.125433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.668 [2024-12-09 10:58:36.125442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:15:45.668 [2024-12-09 10:58:36.125456] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:45.668 [2024-12-09 10:58:36.125465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:15:45.668 [2024-12-09 10:58:36.125480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:45.668 [2024-12-09 10:58:36.125488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:15:45.668 [2024-12-09 10:58:36.125502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:45.668 [2024-12-09 10:58:36.125511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:15:45.668 [2024-12-09 10:58:36.125525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:45.668 [2024-12-09 10:58:36.125534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:15:45.668 [2024-12-09 10:58:36.125547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:45.668 [2024-12-09 10:58:36.125557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:15:45.668 [2024-12-09 10:58:36.125571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:15:45.668 [2024-12-09 10:58:36.125580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:15:45.668 11254.56 IOPS, 43.96 MiB/s [2024-12-09T10:58:38.847Z]
11294.32 IOPS, 44.12 MiB/s [2024-12-09T10:58:38.847Z]
Received shutdown signal, test time was about 28.341792 seconds
00:15:45.668
00:15:45.668 Latency(us)
00:15:45.668 [2024-12-09T10:58:38.847Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:15:45.668 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:15:45.668 Verification LBA range: start 0x0 length 0x4000
00:15:45.668 Nvme0n1                     :      28.34   11304.29      44.16       0.00       0.00   11302.09     105.53 3018433.62
00:15:45.668 [2024-12-09T10:58:38.847Z] ===================================================================================================================
00:15:45.668 [2024-12-09T10:58:38.847Z] Total                       :            11304.29      44.16       0.00       0.00   11302.09     105.53 3018433.62
00:15:45.668 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:45.668 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:15:45.668 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:15:45.668 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- #
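A quick sanity check of the performance summary above (a sketch, not output captured from the run; it assumes the 4096-byte IO size and queue depth 128 shown on the Job line, and that awk is available on the test VM): the MiB/s column follows directly from the IOPS column, and the average latency is roughly queue depth divided by IOPS (Little's law).
$ awk 'BEGIN { printf "%.2f MiB/s\n", 11304.29 * 4096 / 1048576 }'    # prints 44.16 MiB/s, matching the Total row
$ awk 'BEGIN { printf "%.0f us\n", 128 / 11304.29 * 1000000 }'        # prints 11323 us, close to the reported 11302.09 us average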
nvmftestfini 00:15:45.668 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:45.668 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:15:45.927 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:45.927 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:15:45.927 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:45.927 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:45.927 rmmod nvme_tcp 00:15:45.927 rmmod nvme_fabrics 00:15:45.927 rmmod nvme_keyring 00:15:45.927 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:45.927 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:15:45.927 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:15:45.927 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76421 ']' 00:15:45.927 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76421 00:15:45.927 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76421 ']' 00:15:45.927 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76421 00:15:45.927 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:15:45.927 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.927 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76421 00:15:45.928 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:45.928 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:45.928 killing process with pid 76421 00:15:45.928 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76421' 00:15:45.928 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76421 00:15:45.928 10:58:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76421 00:15:46.187 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:46.187 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:46.187 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:46.187 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:15:46.187 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:15:46.187 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:46.188 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:15:46.188 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:46.188 
10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:46.188 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:46.188 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:46.188 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:46.188 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:46.188 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:46.188 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:46.188 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:46.188 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:46.188 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:46.188 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:46.448 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:46.448 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:46.448 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:46.448 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:46.448 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.448 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:46.448 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.448 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:15:46.448 00:15:46.448 real 0m33.300s 00:15:46.448 user 1m45.042s 00:15:46.448 sys 0m9.418s 00:15:46.448 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.448 10:58:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:46.448 ************************************ 00:15:46.448 END TEST nvmf_host_multipath_status 00:15:46.448 ************************************ 00:15:46.448 10:58:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:46.448 10:58:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:46.448 10:58:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.448 10:58:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:46.448 ************************************ 00:15:46.448 START TEST nvmf_discovery_remove_ifc 00:15:46.448 ************************************ 00:15:46.448 10:58:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:15:46.709 * Looking for test storage... 00:15:46.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:46.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.709 --rc genhtml_branch_coverage=1 00:15:46.709 --rc genhtml_function_coverage=1 00:15:46.709 --rc genhtml_legend=1 00:15:46.709 --rc geninfo_all_blocks=1 00:15:46.709 --rc geninfo_unexecuted_blocks=1 00:15:46.709 00:15:46.709 ' 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:46.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.709 --rc genhtml_branch_coverage=1 00:15:46.709 --rc genhtml_function_coverage=1 00:15:46.709 --rc genhtml_legend=1 00:15:46.709 --rc geninfo_all_blocks=1 00:15:46.709 --rc geninfo_unexecuted_blocks=1 00:15:46.709 00:15:46.709 ' 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:46.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.709 --rc genhtml_branch_coverage=1 00:15:46.709 --rc genhtml_function_coverage=1 00:15:46.709 --rc genhtml_legend=1 00:15:46.709 --rc geninfo_all_blocks=1 00:15:46.709 --rc geninfo_unexecuted_blocks=1 00:15:46.709 00:15:46.709 ' 00:15:46.709 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:46.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.709 --rc genhtml_branch_coverage=1 00:15:46.709 --rc genhtml_function_coverage=1 00:15:46.709 --rc genhtml_legend=1 00:15:46.709 --rc geninfo_all_blocks=1 00:15:46.709 --rc geninfo_unexecuted_blocks=1 00:15:46.710 00:15:46.710 ' 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:46.710 10:58:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:46.710 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:46.710 10:58:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:46.710 Cannot find device "nvmf_init_br" 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:46.710 Cannot find device "nvmf_init_br2" 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:46.710 Cannot find device "nvmf_tgt_br" 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:46.710 Cannot find device "nvmf_tgt_br2" 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:15:46.710 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:46.970 Cannot find device "nvmf_init_br" 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:46.970 Cannot find device "nvmf_init_br2" 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:46.970 Cannot find device "nvmf_tgt_br" 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:46.970 Cannot find device "nvmf_tgt_br2" 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:46.970 Cannot find device "nvmf_br" 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:46.970 Cannot find device "nvmf_init_if" 00:15:46.970 10:58:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:46.970 Cannot find device "nvmf_init_if2" 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:46.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:46.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:15:46.970 10:58:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:46.971 10:58:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:46.971 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:47.230 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:47.230 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:47.230 00:15:47.230 --- 10.0.0.3 ping statistics --- 00:15:47.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.230 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:47.230 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:47.230 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:15:47.230 00:15:47.230 --- 10.0.0.4 ping statistics --- 00:15:47.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.230 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:47.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:47.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:15:47.230 00:15:47.230 --- 10.0.0.1 ping statistics --- 00:15:47.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.230 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:47.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:15:47.230 00:15:47.230 --- 10.0.0.2 ping statistics --- 00:15:47.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.230 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77245 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77245 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77245 ']' 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.230 10:58:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:47.230 [2024-12-09 10:58:40.241319] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:15:47.230 [2024-12-09 10:58:40.241384] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.230 [2024-12-09 10:58:40.393357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.489 [2024-12-09 10:58:40.442078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.489 [2024-12-09 10:58:40.442113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.489 [2024-12-09 10:58:40.442120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.489 [2024-12-09 10:58:40.442124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.489 [2024-12-09 10:58:40.442128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.489 [2024-12-09 10:58:40.442375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.489 [2024-12-09 10:58:40.481970] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:48.060 [2024-12-09 10:58:41.150164] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.060 [2024-12-09 10:58:41.158261] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:48.060 null0 00:15:48.060 [2024-12-09 10:58:41.190097] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77279 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77279 /tmp/host.sock 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77279 ']' 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.060 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.060 10:58:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:48.335 [2024-12-09 10:58:41.266726] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:15:48.335 [2024-12-09 10:58:41.266792] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77279 ] 00:15:48.335 [2024-12-09 10:58:41.400919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.335 [2024-12-09 10:58:41.452652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.277 10:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.277 10:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:15:49.277 10:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:49.277 10:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:15:49.277 10:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.277 10:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:49.277 10:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.277 10:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:15:49.277 10:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.277 10:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:49.277 [2024-12-09 10:58:42.198891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:49.277 10:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.277 10:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:15:49.277 10:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.277 10:58:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:50.210 [2024-12-09 10:58:43.248562] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:50.210 [2024-12-09 10:58:43.248585] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:50.210 [2024-12-09 10:58:43.248599] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:50.210 [2024-12-09 10:58:43.254582] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:15:50.210 [2024-12-09 10:58:43.308800] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:15:50.210 [2024-12-09 10:58:43.309559] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1087f00:1 started. 00:15:50.210 [2024-12-09 10:58:43.310973] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:50.210 [2024-12-09 10:58:43.311035] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:50.210 [2024-12-09 10:58:43.311053] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:50.210 [2024-12-09 10:58:43.311066] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:50.210 [2024-12-09 10:58:43.311086] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:50.210 [2024-12-09 10:58:43.317299] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1087f00 was disconnected and freed. delete nvme_qpair. 
00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.210 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:50.468 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.468 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:50.468 10:58:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:51.402 10:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:51.402 10:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:51.402 10:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:51.402 10:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.402 10:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:51.402 10:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:51.402 10:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:51.402 10:58:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.402 10:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:51.402 10:58:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:52.336 10:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:52.336 10:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:52.336 10:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:52.336 10:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.336 10:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:52.336 10:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:52.336 10:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:52.593 10:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.594 10:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:52.594 10:58:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:53.527 10:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:53.527 10:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:53.527 10:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:53.527 10:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.527 10:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:53.527 10:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:53.527 10:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:53.527 10:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.527 10:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:53.527 10:58:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:54.461 10:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:54.461 10:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.461 10:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.461 10:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:54.461 10:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:54.461 10:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:54.461 10:58:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:54.718 10:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.718 10:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:54.718 10:58:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:55.652 10:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:55.652 10:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:55.652 10:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.652 10:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:55.652 10:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:55.652 10:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:55.652 10:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:55.652 10:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.652 10:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:55.652 10:58:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:55.652 [2024-12-09 10:58:48.728708] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:15:55.652 [2024-12-09 10:58:48.728761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.652 [2024-12-09 10:58:48.728771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.652 [2024-12-09 10:58:48.728781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.652 [2024-12-09 10:58:48.728786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.652 [2024-12-09 10:58:48.728792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.652 [2024-12-09 10:58:48.728797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.652 [2024-12-09 10:58:48.728803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.652 [2024-12-09 10:58:48.728808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.652 [2024-12-09 10:58:48.728830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:55.652 [2024-12-09 10:58:48.728838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:55.652 [2024-12-09 10:58:48.728845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063fc0 is same with the state(6) to be set 00:15:55.652 [2024-12-09 10:58:48.738685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1063fc0 (9): Bad file descriptor 00:15:55.652 [2024-12-09 10:58:48.748678] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:15:55.652 [2024-12-09 10:58:48.748692] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:15:55.652 [2024-12-09 10:58:48.748695] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:15:55.652 [2024-12-09 10:58:48.748699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:15:55.652 [2024-12-09 10:58:48.748724] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:15:56.587 10:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:56.587 10:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:56.587 10:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.587 10:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:56.587 10:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:56.587 10:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:56.587 10:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:56.587 [2024-12-09 10:58:49.754932] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:15:56.587 [2024-12-09 10:58:49.755079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1063fc0 with addr=10.0.0.3, port=4420 00:15:56.587 [2024-12-09 10:58:49.755119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1063fc0 is same with the state(6) to be set 00:15:56.587 [2024-12-09 10:58:49.755194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1063fc0 (9): Bad file descriptor 00:15:56.587 [2024-12-09 10:58:49.756456] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:15:56.587 [2024-12-09 10:58:49.756564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:15:56.587 [2024-12-09 10:58:49.756589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:15:56.587 [2024-12-09 10:58:49.756613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:15:56.587 [2024-12-09 10:58:49.756633] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:15:56.587 [2024-12-09 10:58:49.756650] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:15:56.587 [2024-12-09 10:58:49.756661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:15:56.587 [2024-12-09 10:58:49.756684] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:15:56.587 [2024-12-09 10:58:49.756697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:15:56.845 10:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.845 10:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:15:56.845 10:58:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:57.780 [2024-12-09 10:58:50.754886] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:15:57.780 [2024-12-09 10:58:50.754916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:15:57.780 [2024-12-09 10:58:50.754935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:15:57.780 [2024-12-09 10:58:50.754942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:15:57.780 [2024-12-09 10:58:50.754949] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:15:57.780 [2024-12-09 10:58:50.754955] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:15:57.780 [2024-12-09 10:58:50.754959] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:15:57.780 [2024-12-09 10:58:50.754963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:15:57.780 [2024-12-09 10:58:50.754992] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:15:57.780 [2024-12-09 10:58:50.755026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.780 [2024-12-09 10:58:50.755036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.780 [2024-12-09 10:58:50.755045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.780 [2024-12-09 10:58:50.755050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.780 [2024-12-09 10:58:50.755056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.780 [2024-12-09 10:58:50.755062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.780 [2024-12-09 10:58:50.755068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.780 [2024-12-09 10:58:50.755073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.780 [2024-12-09 10:58:50.755079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.780 [2024-12-09 10:58:50.755084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.780 [2024-12-09 10:58:50.755090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:15:57.780 [2024-12-09 10:58:50.755356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfefa20 (9): Bad file descriptor 00:15:57.780 [2024-12-09 10:58:50.756363] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:15:57.780 [2024-12-09 10:58:50.756378] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:57.780 10:58:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:59.154 10:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:59.154 10:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:59.154 10:58:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:59.154 10:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.155 10:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:59.155 10:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:59.155 10:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:59.155 10:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.155 10:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:15:59.155 10:58:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:15:59.721 [2024-12-09 10:58:52.763137] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:59.721 [2024-12-09 10:58:52.763157] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:59.721 [2024-12-09 10:58:52.763169] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:59.721 [2024-12-09 10:58:52.769153] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:15:59.721 [2024-12-09 10:58:52.823292] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:15:59.721 [2024-12-09 10:58:52.823949] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x10901d0:1 started. 00:15:59.721 [2024-12-09 10:58:52.824958] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:15:59.721 [2024-12-09 10:58:52.824996] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:15:59.721 [2024-12-09 10:58:52.825013] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:15:59.721 [2024-12-09 10:58:52.825027] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:15:59.721 [2024-12-09 10:58:52.825033] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:59.721 [2024-12-09 10:58:52.831786] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x10901d0 was disconnected and freed. delete nvme_qpair. 
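The trace above polls the host app once per second until the re-discovered namespace shows up in the bdev list as nvme1n1. A minimal sketch of that polling pattern, assuming the SPDK test helper rpc_cmd and the /tmp/host.sock RPC socket seen in the trace (an illustrative approximation, not the script verbatim):

    # Approximation of the get_bdev_list / wait_for_bdev loop traced above.
    get_bdev_list() {
        # Ask the host app for its bdevs and flatten the names into one sorted line.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local bdev=$1
        # Re-check once per second until the expected bdev name appears.
        until [[ " $(get_bdev_list) " == *" $bdev "* ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme1n1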
00:15:59.979 10:58:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:15:59.979 10:58:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:59.979 10:58:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:15:59.979 10:58:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.979 10:58:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:15:59.979 10:58:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:15:59.979 10:58:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:15:59.979 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.979 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:15:59.979 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:15:59.979 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77279 00:15:59.979 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77279 ']' 00:15:59.979 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77279 00:15:59.979 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:15:59.979 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.979 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77279 00:15:59.979 killing process with pid 77279 00:15:59.979 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:59.979 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:59.979 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77279' 00:15:59.979 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77279 00:15:59.979 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77279 00:16:00.238 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:00.238 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:00.238 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:16:00.238 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:00.238 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:16:00.238 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:00.238 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:00.238 rmmod nvme_tcp 00:16:00.238 rmmod nvme_fabrics 00:16:00.238 rmmod nvme_keyring 00:16:00.238 10:58:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77245 ']' 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77245 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77245 ']' 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77245 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77245 00:16:00.497 killing process with pid 77245 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77245' 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77245 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77245 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:00.497 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:00.756 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:00.756 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:00.756 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:00.756 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:00.756 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:00.756 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:00.756 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:00.756 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:00.756 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:00.756 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:00.756 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:00.756 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:00.756 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:00.756 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.756 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.756 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.016 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:16:01.016 00:16:01.016 real 0m14.411s 00:16:01.016 user 0m24.441s 00:16:01.016 sys 0m2.553s 00:16:01.016 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.016 10:58:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:01.016 ************************************ 00:16:01.016 END TEST nvmf_discovery_remove_ifc 00:16:01.016 ************************************ 00:16:01.016 10:58:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:01.016 10:58:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:01.016 10:58:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.016 10:58:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:01.016 ************************************ 00:16:01.016 START TEST nvmf_identify_kernel_target 00:16:01.016 ************************************ 00:16:01.016 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:01.016 * Looking for test storage... 
00:16:01.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:01.016 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:01.016 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:16:01.016 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:01.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.277 --rc genhtml_branch_coverage=1 00:16:01.277 --rc genhtml_function_coverage=1 00:16:01.277 --rc genhtml_legend=1 00:16:01.277 --rc geninfo_all_blocks=1 00:16:01.277 --rc geninfo_unexecuted_blocks=1 00:16:01.277 00:16:01.277 ' 00:16:01.277 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:01.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.277 --rc genhtml_branch_coverage=1 00:16:01.277 --rc genhtml_function_coverage=1 00:16:01.277 --rc genhtml_legend=1 00:16:01.277 --rc geninfo_all_blocks=1 00:16:01.277 --rc geninfo_unexecuted_blocks=1 00:16:01.277 00:16:01.277 ' 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:01.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.278 --rc genhtml_branch_coverage=1 00:16:01.278 --rc genhtml_function_coverage=1 00:16:01.278 --rc genhtml_legend=1 00:16:01.278 --rc geninfo_all_blocks=1 00:16:01.278 --rc geninfo_unexecuted_blocks=1 00:16:01.278 00:16:01.278 ' 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:01.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:01.278 --rc genhtml_branch_coverage=1 00:16:01.278 --rc genhtml_function_coverage=1 00:16:01.278 --rc genhtml_legend=1 00:16:01.278 --rc geninfo_all_blocks=1 00:16:01.278 --rc geninfo_unexecuted_blocks=1 00:16:01.278 00:16:01.278 ' 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
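Before sourcing nvmf/common.sh, the trace above walks scripts/common.sh deciding whether the installed lcov (1.15 here) is older than 2, so the run can pick compatible coverage flags. A condensed sketch of that component-wise version comparison, reconstructed from the traced steps (the function names cmp_versions and lt follow the trace; the body is an illustrative condensation, not the script verbatim):

    # Condensed form of the cmp_versions helper walked through above.
    cmp_versions() {           # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        # Compare components left to right, padding the shorter version with zeros.
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]     # every component matched
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    # Usage in the spirit of the trace: detect an lcov 1.x install.
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov is older than 2'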
00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:01.278 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:01.278 10:58:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:01.278 10:58:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:01.278 Cannot find device "nvmf_init_br" 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:01.278 Cannot find device "nvmf_init_br2" 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:01.278 Cannot find device "nvmf_tgt_br" 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:01.278 Cannot find device "nvmf_tgt_br2" 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:01.278 Cannot find device "nvmf_init_br" 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:01.278 Cannot find device "nvmf_init_br2" 00:16:01.278 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:16:01.279 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:01.279 Cannot find device "nvmf_tgt_br" 00:16:01.279 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:16:01.279 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:01.279 Cannot find device "nvmf_tgt_br2" 00:16:01.279 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:16:01.279 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:01.279 Cannot find device "nvmf_br" 00:16:01.279 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:16:01.279 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:01.539 Cannot find device "nvmf_init_if" 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:01.539 Cannot find device "nvmf_init_if2" 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:01.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.539 10:58:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:01.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:01.539 10:58:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:01.539 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:01.800 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:01.800 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:16:01.800 00:16:01.800 --- 10.0.0.3 ping statistics --- 00:16:01.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.800 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:01.800 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:01.800 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.074 ms 00:16:01.800 00:16:01.800 --- 10.0.0.4 ping statistics --- 00:16:01.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.800 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:01.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:01.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:01.800 00:16:01.800 --- 10.0.0.1 ping statistics --- 00:16:01.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.800 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:01.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:01.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:16:01.800 00:16:01.800 --- 10.0.0.2 ping statistics --- 00:16:01.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.800 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:01.800 10:58:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:02.368 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:02.368 Waiting for block devices as requested 00:16:02.368 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:02.368 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:02.627 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:02.627 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:02.627 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:16:02.627 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:02.627 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:02.627 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:02.627 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:16:02.627 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:02.627 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:02.627 No valid GPT data, bailing 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:02.628 10:58:55 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:02.628 No valid GPT data, bailing 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:02.628 No valid GPT data, bailing 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:02.628 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:02.888 No valid GPT data, bailing 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid=0813c78c-bf40-477e-b94d-3900e5d9beb7 -a 10.0.0.1 -t tcp -s 4420 00:16:02.888 00:16:02.888 Discovery Log Number of Records 2, Generation counter 2 00:16:02.888 =====Discovery Log Entry 0====== 00:16:02.888 trtype: tcp 00:16:02.888 adrfam: ipv4 00:16:02.888 subtype: current discovery subsystem 00:16:02.888 treq: not specified, sq flow control disable supported 00:16:02.888 portid: 1 00:16:02.888 trsvcid: 4420 00:16:02.888 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:02.888 traddr: 10.0.0.1 00:16:02.888 eflags: none 00:16:02.888 sectype: none 00:16:02.888 =====Discovery Log Entry 1====== 00:16:02.888 trtype: tcp 00:16:02.888 adrfam: ipv4 00:16:02.888 subtype: nvme subsystem 00:16:02.888 treq: not 
specified, sq flow control disable supported 00:16:02.888 portid: 1 00:16:02.888 trsvcid: 4420 00:16:02.888 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:02.888 traddr: 10.0.0.1 00:16:02.888 eflags: none 00:16:02.888 sectype: none 00:16:02.888 10:58:55 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:02.888 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:03.148 ===================================================== 00:16:03.148 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:03.148 ===================================================== 00:16:03.148 Controller Capabilities/Features 00:16:03.148 ================================ 00:16:03.148 Vendor ID: 0000 00:16:03.148 Subsystem Vendor ID: 0000 00:16:03.148 Serial Number: 9cac9563a1476ed8652d 00:16:03.148 Model Number: Linux 00:16:03.148 Firmware Version: 6.8.9-20 00:16:03.148 Recommended Arb Burst: 0 00:16:03.148 IEEE OUI Identifier: 00 00 00 00:16:03.148 Multi-path I/O 00:16:03.148 May have multiple subsystem ports: No 00:16:03.148 May have multiple controllers: No 00:16:03.148 Associated with SR-IOV VF: No 00:16:03.148 Max Data Transfer Size: Unlimited 00:16:03.148 Max Number of Namespaces: 0 00:16:03.148 Max Number of I/O Queues: 1024 00:16:03.148 NVMe Specification Version (VS): 1.3 00:16:03.148 NVMe Specification Version (Identify): 1.3 00:16:03.148 Maximum Queue Entries: 1024 00:16:03.148 Contiguous Queues Required: No 00:16:03.148 Arbitration Mechanisms Supported 00:16:03.148 Weighted Round Robin: Not Supported 00:16:03.148 Vendor Specific: Not Supported 00:16:03.148 Reset Timeout: 7500 ms 00:16:03.148 Doorbell Stride: 4 bytes 00:16:03.148 NVM Subsystem Reset: Not Supported 00:16:03.148 Command Sets Supported 00:16:03.148 NVM Command Set: Supported 00:16:03.148 Boot Partition: Not Supported 00:16:03.148 Memory Page Size Minimum: 4096 bytes 00:16:03.148 Memory Page Size Maximum: 4096 bytes 00:16:03.148 Persistent Memory Region: Not Supported 00:16:03.148 Optional Asynchronous Events Supported 00:16:03.148 Namespace Attribute Notices: Not Supported 00:16:03.148 Firmware Activation Notices: Not Supported 00:16:03.148 ANA Change Notices: Not Supported 00:16:03.148 PLE Aggregate Log Change Notices: Not Supported 00:16:03.148 LBA Status Info Alert Notices: Not Supported 00:16:03.148 EGE Aggregate Log Change Notices: Not Supported 00:16:03.148 Normal NVM Subsystem Shutdown event: Not Supported 00:16:03.148 Zone Descriptor Change Notices: Not Supported 00:16:03.148 Discovery Log Change Notices: Supported 00:16:03.148 Controller Attributes 00:16:03.148 128-bit Host Identifier: Not Supported 00:16:03.148 Non-Operational Permissive Mode: Not Supported 00:16:03.148 NVM Sets: Not Supported 00:16:03.148 Read Recovery Levels: Not Supported 00:16:03.148 Endurance Groups: Not Supported 00:16:03.149 Predictable Latency Mode: Not Supported 00:16:03.149 Traffic Based Keep ALive: Not Supported 00:16:03.149 Namespace Granularity: Not Supported 00:16:03.149 SQ Associations: Not Supported 00:16:03.149 UUID List: Not Supported 00:16:03.149 Multi-Domain Subsystem: Not Supported 00:16:03.149 Fixed Capacity Management: Not Supported 00:16:03.149 Variable Capacity Management: Not Supported 00:16:03.149 Delete Endurance Group: Not Supported 00:16:03.149 Delete NVM Set: Not Supported 00:16:03.149 Extended LBA Formats Supported: Not Supported 00:16:03.149 Flexible Data 
Placement Supported: Not Supported 00:16:03.149 00:16:03.149 Controller Memory Buffer Support 00:16:03.149 ================================ 00:16:03.149 Supported: No 00:16:03.149 00:16:03.149 Persistent Memory Region Support 00:16:03.149 ================================ 00:16:03.149 Supported: No 00:16:03.149 00:16:03.149 Admin Command Set Attributes 00:16:03.149 ============================ 00:16:03.149 Security Send/Receive: Not Supported 00:16:03.149 Format NVM: Not Supported 00:16:03.149 Firmware Activate/Download: Not Supported 00:16:03.149 Namespace Management: Not Supported 00:16:03.149 Device Self-Test: Not Supported 00:16:03.149 Directives: Not Supported 00:16:03.149 NVMe-MI: Not Supported 00:16:03.149 Virtualization Management: Not Supported 00:16:03.149 Doorbell Buffer Config: Not Supported 00:16:03.149 Get LBA Status Capability: Not Supported 00:16:03.149 Command & Feature Lockdown Capability: Not Supported 00:16:03.149 Abort Command Limit: 1 00:16:03.149 Async Event Request Limit: 1 00:16:03.149 Number of Firmware Slots: N/A 00:16:03.149 Firmware Slot 1 Read-Only: N/A 00:16:03.149 Firmware Activation Without Reset: N/A 00:16:03.149 Multiple Update Detection Support: N/A 00:16:03.149 Firmware Update Granularity: No Information Provided 00:16:03.149 Per-Namespace SMART Log: No 00:16:03.149 Asymmetric Namespace Access Log Page: Not Supported 00:16:03.149 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:03.149 Command Effects Log Page: Not Supported 00:16:03.149 Get Log Page Extended Data: Supported 00:16:03.149 Telemetry Log Pages: Not Supported 00:16:03.149 Persistent Event Log Pages: Not Supported 00:16:03.149 Supported Log Pages Log Page: May Support 00:16:03.149 Commands Supported & Effects Log Page: Not Supported 00:16:03.149 Feature Identifiers & Effects Log Page:May Support 00:16:03.149 NVMe-MI Commands & Effects Log Page: May Support 00:16:03.149 Data Area 4 for Telemetry Log: Not Supported 00:16:03.149 Error Log Page Entries Supported: 1 00:16:03.149 Keep Alive: Not Supported 00:16:03.149 00:16:03.149 NVM Command Set Attributes 00:16:03.149 ========================== 00:16:03.149 Submission Queue Entry Size 00:16:03.149 Max: 1 00:16:03.149 Min: 1 00:16:03.149 Completion Queue Entry Size 00:16:03.149 Max: 1 00:16:03.149 Min: 1 00:16:03.149 Number of Namespaces: 0 00:16:03.149 Compare Command: Not Supported 00:16:03.149 Write Uncorrectable Command: Not Supported 00:16:03.149 Dataset Management Command: Not Supported 00:16:03.149 Write Zeroes Command: Not Supported 00:16:03.149 Set Features Save Field: Not Supported 00:16:03.149 Reservations: Not Supported 00:16:03.149 Timestamp: Not Supported 00:16:03.149 Copy: Not Supported 00:16:03.149 Volatile Write Cache: Not Present 00:16:03.149 Atomic Write Unit (Normal): 1 00:16:03.149 Atomic Write Unit (PFail): 1 00:16:03.149 Atomic Compare & Write Unit: 1 00:16:03.149 Fused Compare & Write: Not Supported 00:16:03.149 Scatter-Gather List 00:16:03.149 SGL Command Set: Supported 00:16:03.149 SGL Keyed: Not Supported 00:16:03.149 SGL Bit Bucket Descriptor: Not Supported 00:16:03.149 SGL Metadata Pointer: Not Supported 00:16:03.149 Oversized SGL: Not Supported 00:16:03.149 SGL Metadata Address: Not Supported 00:16:03.149 SGL Offset: Supported 00:16:03.149 Transport SGL Data Block: Not Supported 00:16:03.149 Replay Protected Memory Block: Not Supported 00:16:03.149 00:16:03.149 Firmware Slot Information 00:16:03.149 ========================= 00:16:03.149 Active slot: 0 00:16:03.149 00:16:03.149 00:16:03.149 Error Log 
00:16:03.149 ========= 00:16:03.149 00:16:03.149 Active Namespaces 00:16:03.149 ================= 00:16:03.149 Discovery Log Page 00:16:03.149 ================== 00:16:03.149 Generation Counter: 2 00:16:03.149 Number of Records: 2 00:16:03.149 Record Format: 0 00:16:03.149 00:16:03.149 Discovery Log Entry 0 00:16:03.149 ---------------------- 00:16:03.149 Transport Type: 3 (TCP) 00:16:03.149 Address Family: 1 (IPv4) 00:16:03.149 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:03.149 Entry Flags: 00:16:03.149 Duplicate Returned Information: 0 00:16:03.149 Explicit Persistent Connection Support for Discovery: 0 00:16:03.149 Transport Requirements: 00:16:03.149 Secure Channel: Not Specified 00:16:03.149 Port ID: 1 (0x0001) 00:16:03.149 Controller ID: 65535 (0xffff) 00:16:03.149 Admin Max SQ Size: 32 00:16:03.149 Transport Service Identifier: 4420 00:16:03.149 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:03.149 Transport Address: 10.0.0.1 00:16:03.149 Discovery Log Entry 1 00:16:03.149 ---------------------- 00:16:03.149 Transport Type: 3 (TCP) 00:16:03.149 Address Family: 1 (IPv4) 00:16:03.149 Subsystem Type: 2 (NVM Subsystem) 00:16:03.149 Entry Flags: 00:16:03.149 Duplicate Returned Information: 0 00:16:03.149 Explicit Persistent Connection Support for Discovery: 0 00:16:03.149 Transport Requirements: 00:16:03.149 Secure Channel: Not Specified 00:16:03.149 Port ID: 1 (0x0001) 00:16:03.149 Controller ID: 65535 (0xffff) 00:16:03.149 Admin Max SQ Size: 32 00:16:03.149 Transport Service Identifier: 4420 00:16:03.149 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:03.149 Transport Address: 10.0.0.1 00:16:03.149 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:03.149 get_feature(0x01) failed 00:16:03.149 get_feature(0x02) failed 00:16:03.149 get_feature(0x04) failed 00:16:03.149 ===================================================== 00:16:03.149 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:03.149 ===================================================== 00:16:03.149 Controller Capabilities/Features 00:16:03.149 ================================ 00:16:03.149 Vendor ID: 0000 00:16:03.149 Subsystem Vendor ID: 0000 00:16:03.149 Serial Number: 371ac25ec8d79827549a 00:16:03.149 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:03.149 Firmware Version: 6.8.9-20 00:16:03.149 Recommended Arb Burst: 6 00:16:03.149 IEEE OUI Identifier: 00 00 00 00:16:03.149 Multi-path I/O 00:16:03.149 May have multiple subsystem ports: Yes 00:16:03.149 May have multiple controllers: Yes 00:16:03.149 Associated with SR-IOV VF: No 00:16:03.149 Max Data Transfer Size: Unlimited 00:16:03.149 Max Number of Namespaces: 1024 00:16:03.149 Max Number of I/O Queues: 128 00:16:03.149 NVMe Specification Version (VS): 1.3 00:16:03.149 NVMe Specification Version (Identify): 1.3 00:16:03.149 Maximum Queue Entries: 1024 00:16:03.149 Contiguous Queues Required: No 00:16:03.149 Arbitration Mechanisms Supported 00:16:03.149 Weighted Round Robin: Not Supported 00:16:03.149 Vendor Specific: Not Supported 00:16:03.149 Reset Timeout: 7500 ms 00:16:03.149 Doorbell Stride: 4 bytes 00:16:03.149 NVM Subsystem Reset: Not Supported 00:16:03.149 Command Sets Supported 00:16:03.149 NVM Command Set: Supported 00:16:03.149 Boot Partition: Not Supported 00:16:03.149 Memory 
Page Size Minimum: 4096 bytes 00:16:03.149 Memory Page Size Maximum: 4096 bytes 00:16:03.149 Persistent Memory Region: Not Supported 00:16:03.149 Optional Asynchronous Events Supported 00:16:03.149 Namespace Attribute Notices: Supported 00:16:03.149 Firmware Activation Notices: Not Supported 00:16:03.149 ANA Change Notices: Supported 00:16:03.149 PLE Aggregate Log Change Notices: Not Supported 00:16:03.149 LBA Status Info Alert Notices: Not Supported 00:16:03.149 EGE Aggregate Log Change Notices: Not Supported 00:16:03.149 Normal NVM Subsystem Shutdown event: Not Supported 00:16:03.149 Zone Descriptor Change Notices: Not Supported 00:16:03.149 Discovery Log Change Notices: Not Supported 00:16:03.149 Controller Attributes 00:16:03.149 128-bit Host Identifier: Supported 00:16:03.149 Non-Operational Permissive Mode: Not Supported 00:16:03.149 NVM Sets: Not Supported 00:16:03.149 Read Recovery Levels: Not Supported 00:16:03.149 Endurance Groups: Not Supported 00:16:03.149 Predictable Latency Mode: Not Supported 00:16:03.149 Traffic Based Keep ALive: Supported 00:16:03.149 Namespace Granularity: Not Supported 00:16:03.149 SQ Associations: Not Supported 00:16:03.149 UUID List: Not Supported 00:16:03.149 Multi-Domain Subsystem: Not Supported 00:16:03.149 Fixed Capacity Management: Not Supported 00:16:03.149 Variable Capacity Management: Not Supported 00:16:03.149 Delete Endurance Group: Not Supported 00:16:03.149 Delete NVM Set: Not Supported 00:16:03.149 Extended LBA Formats Supported: Not Supported 00:16:03.150 Flexible Data Placement Supported: Not Supported 00:16:03.150 00:16:03.150 Controller Memory Buffer Support 00:16:03.150 ================================ 00:16:03.150 Supported: No 00:16:03.150 00:16:03.150 Persistent Memory Region Support 00:16:03.150 ================================ 00:16:03.150 Supported: No 00:16:03.150 00:16:03.150 Admin Command Set Attributes 00:16:03.150 ============================ 00:16:03.150 Security Send/Receive: Not Supported 00:16:03.150 Format NVM: Not Supported 00:16:03.150 Firmware Activate/Download: Not Supported 00:16:03.150 Namespace Management: Not Supported 00:16:03.150 Device Self-Test: Not Supported 00:16:03.150 Directives: Not Supported 00:16:03.150 NVMe-MI: Not Supported 00:16:03.150 Virtualization Management: Not Supported 00:16:03.150 Doorbell Buffer Config: Not Supported 00:16:03.150 Get LBA Status Capability: Not Supported 00:16:03.150 Command & Feature Lockdown Capability: Not Supported 00:16:03.150 Abort Command Limit: 4 00:16:03.150 Async Event Request Limit: 4 00:16:03.150 Number of Firmware Slots: N/A 00:16:03.150 Firmware Slot 1 Read-Only: N/A 00:16:03.150 Firmware Activation Without Reset: N/A 00:16:03.150 Multiple Update Detection Support: N/A 00:16:03.150 Firmware Update Granularity: No Information Provided 00:16:03.150 Per-Namespace SMART Log: Yes 00:16:03.150 Asymmetric Namespace Access Log Page: Supported 00:16:03.150 ANA Transition Time : 10 sec 00:16:03.150 00:16:03.150 Asymmetric Namespace Access Capabilities 00:16:03.150 ANA Optimized State : Supported 00:16:03.150 ANA Non-Optimized State : Supported 00:16:03.150 ANA Inaccessible State : Supported 00:16:03.150 ANA Persistent Loss State : Supported 00:16:03.150 ANA Change State : Supported 00:16:03.150 ANAGRPID is not changed : No 00:16:03.150 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:03.150 00:16:03.150 ANA Group Identifier Maximum : 128 00:16:03.150 Number of ANA Group Identifiers : 128 00:16:03.150 Max Number of Allowed Namespaces : 1024 00:16:03.150 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:16:03.150 Command Effects Log Page: Supported 00:16:03.150 Get Log Page Extended Data: Supported 00:16:03.150 Telemetry Log Pages: Not Supported 00:16:03.150 Persistent Event Log Pages: Not Supported 00:16:03.150 Supported Log Pages Log Page: May Support 00:16:03.150 Commands Supported & Effects Log Page: Not Supported 00:16:03.150 Feature Identifiers & Effects Log Page:May Support 00:16:03.150 NVMe-MI Commands & Effects Log Page: May Support 00:16:03.150 Data Area 4 for Telemetry Log: Not Supported 00:16:03.150 Error Log Page Entries Supported: 128 00:16:03.150 Keep Alive: Supported 00:16:03.150 Keep Alive Granularity: 1000 ms 00:16:03.150 00:16:03.150 NVM Command Set Attributes 00:16:03.150 ========================== 00:16:03.150 Submission Queue Entry Size 00:16:03.150 Max: 64 00:16:03.150 Min: 64 00:16:03.150 Completion Queue Entry Size 00:16:03.150 Max: 16 00:16:03.150 Min: 16 00:16:03.150 Number of Namespaces: 1024 00:16:03.150 Compare Command: Not Supported 00:16:03.150 Write Uncorrectable Command: Not Supported 00:16:03.150 Dataset Management Command: Supported 00:16:03.150 Write Zeroes Command: Supported 00:16:03.150 Set Features Save Field: Not Supported 00:16:03.150 Reservations: Not Supported 00:16:03.150 Timestamp: Not Supported 00:16:03.150 Copy: Not Supported 00:16:03.150 Volatile Write Cache: Present 00:16:03.150 Atomic Write Unit (Normal): 1 00:16:03.150 Atomic Write Unit (PFail): 1 00:16:03.150 Atomic Compare & Write Unit: 1 00:16:03.150 Fused Compare & Write: Not Supported 00:16:03.150 Scatter-Gather List 00:16:03.150 SGL Command Set: Supported 00:16:03.150 SGL Keyed: Not Supported 00:16:03.150 SGL Bit Bucket Descriptor: Not Supported 00:16:03.150 SGL Metadata Pointer: Not Supported 00:16:03.150 Oversized SGL: Not Supported 00:16:03.150 SGL Metadata Address: Not Supported 00:16:03.150 SGL Offset: Supported 00:16:03.150 Transport SGL Data Block: Not Supported 00:16:03.150 Replay Protected Memory Block: Not Supported 00:16:03.150 00:16:03.150 Firmware Slot Information 00:16:03.150 ========================= 00:16:03.150 Active slot: 0 00:16:03.150 00:16:03.150 Asymmetric Namespace Access 00:16:03.150 =========================== 00:16:03.150 Change Count : 0 00:16:03.150 Number of ANA Group Descriptors : 1 00:16:03.150 ANA Group Descriptor : 0 00:16:03.150 ANA Group ID : 1 00:16:03.150 Number of NSID Values : 1 00:16:03.150 Change Count : 0 00:16:03.150 ANA State : 1 00:16:03.150 Namespace Identifier : 1 00:16:03.150 00:16:03.150 Commands Supported and Effects 00:16:03.150 ============================== 00:16:03.150 Admin Commands 00:16:03.150 -------------- 00:16:03.150 Get Log Page (02h): Supported 00:16:03.150 Identify (06h): Supported 00:16:03.150 Abort (08h): Supported 00:16:03.150 Set Features (09h): Supported 00:16:03.150 Get Features (0Ah): Supported 00:16:03.150 Asynchronous Event Request (0Ch): Supported 00:16:03.150 Keep Alive (18h): Supported 00:16:03.150 I/O Commands 00:16:03.150 ------------ 00:16:03.150 Flush (00h): Supported 00:16:03.150 Write (01h): Supported LBA-Change 00:16:03.150 Read (02h): Supported 00:16:03.150 Write Zeroes (08h): Supported LBA-Change 00:16:03.150 Dataset Management (09h): Supported 00:16:03.150 00:16:03.150 Error Log 00:16:03.150 ========= 00:16:03.150 Entry: 0 00:16:03.150 Error Count: 0x3 00:16:03.150 Submission Queue Id: 0x0 00:16:03.150 Command Id: 0x5 00:16:03.150 Phase Bit: 0 00:16:03.150 Status Code: 0x2 00:16:03.150 Status Code Type: 0x0 00:16:03.150 Do Not Retry: 1 00:16:03.410 Error 
Location: 0x28 00:16:03.410 LBA: 0x0 00:16:03.410 Namespace: 0x0 00:16:03.410 Vendor Log Page: 0x0 00:16:03.410 ----------- 00:16:03.410 Entry: 1 00:16:03.410 Error Count: 0x2 00:16:03.410 Submission Queue Id: 0x0 00:16:03.410 Command Id: 0x5 00:16:03.410 Phase Bit: 0 00:16:03.410 Status Code: 0x2 00:16:03.410 Status Code Type: 0x0 00:16:03.410 Do Not Retry: 1 00:16:03.410 Error Location: 0x28 00:16:03.410 LBA: 0x0 00:16:03.410 Namespace: 0x0 00:16:03.410 Vendor Log Page: 0x0 00:16:03.410 ----------- 00:16:03.410 Entry: 2 00:16:03.410 Error Count: 0x1 00:16:03.410 Submission Queue Id: 0x0 00:16:03.410 Command Id: 0x4 00:16:03.410 Phase Bit: 0 00:16:03.410 Status Code: 0x2 00:16:03.410 Status Code Type: 0x0 00:16:03.410 Do Not Retry: 1 00:16:03.410 Error Location: 0x28 00:16:03.410 LBA: 0x0 00:16:03.410 Namespace: 0x0 00:16:03.410 Vendor Log Page: 0x0 00:16:03.410 00:16:03.410 Number of Queues 00:16:03.410 ================ 00:16:03.410 Number of I/O Submission Queues: 128 00:16:03.410 Number of I/O Completion Queues: 128 00:16:03.410 00:16:03.410 ZNS Specific Controller Data 00:16:03.410 ============================ 00:16:03.410 Zone Append Size Limit: 0 00:16:03.410 00:16:03.410 00:16:03.410 Active Namespaces 00:16:03.410 ================= 00:16:03.410 get_feature(0x05) failed 00:16:03.410 Namespace ID:1 00:16:03.410 Command Set Identifier: NVM (00h) 00:16:03.410 Deallocate: Supported 00:16:03.410 Deallocated/Unwritten Error: Not Supported 00:16:03.410 Deallocated Read Value: Unknown 00:16:03.410 Deallocate in Write Zeroes: Not Supported 00:16:03.410 Deallocated Guard Field: 0xFFFF 00:16:03.410 Flush: Supported 00:16:03.410 Reservation: Not Supported 00:16:03.410 Namespace Sharing Capabilities: Multiple Controllers 00:16:03.410 Size (in LBAs): 1310720 (5GiB) 00:16:03.410 Capacity (in LBAs): 1310720 (5GiB) 00:16:03.410 Utilization (in LBAs): 1310720 (5GiB) 00:16:03.410 UUID: 1b146252-0f13-483e-ad1b-2f9786ea1d69 00:16:03.410 Thin Provisioning: Not Supported 00:16:03.410 Per-NS Atomic Units: Yes 00:16:03.411 Atomic Boundary Size (Normal): 0 00:16:03.411 Atomic Boundary Size (PFail): 0 00:16:03.411 Atomic Boundary Offset: 0 00:16:03.411 NGUID/EUI64 Never Reused: No 00:16:03.411 ANA group ID: 1 00:16:03.411 Namespace Write Protected: No 00:16:03.411 Number of LBA Formats: 1 00:16:03.411 Current LBA Format: LBA Format #00 00:16:03.411 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:03.411 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:03.411 rmmod nvme_tcp 00:16:03.411 rmmod nvme_fabrics 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:16:03.411 10:58:56 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:03.411 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:16:03.671 10:58:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:04.614 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:04.614 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:04.878 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:04.878 00:16:04.878 real 0m3.831s 00:16:04.878 user 0m1.368s 00:16:04.878 sys 0m1.866s 00:16:04.878 10:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.878 10:58:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.878 ************************************ 00:16:04.878 END TEST nvmf_identify_kernel_target 00:16:04.878 ************************************ 00:16:04.878 10:58:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:04.878 10:58:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:04.878 10:58:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.878 10:58:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:04.878 ************************************ 00:16:04.878 START TEST nvmf_auth_host 00:16:04.878 ************************************ 00:16:04.878 10:58:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:04.878 * Looking for test storage... 
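Before the nvmf_auth_host output continues, here is a minimal sketch of the kernel-target configfs plumbing that the identify test above exercised and that clean_kernel_target tore back down. The NQN, backing device, address and port are copied from this log; the attribute names (attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet configfs ones and are an assumption here, since the xtrace shows the echoed values but not their redirection targets.

# Sketch of the kernel NVMe-oF/TCP target lifecycle seen in the trace above (run as root).
modprobe nvmet
modprobe nvmet_tcp
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo 1 > "$subsys/attr_allow_any_host"                 # attribute names assumed from the standard nvmet layout
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path" # the unused namespace picked by the GPT scan above
echo 1 > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"                    # listen on 10.0.0.1:4420 over TCP/IPv4
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                    # expose the subsystem on the port

# Teardown, mirroring clean_kernel_target:
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet

With the port symlinked, the `nvme discover` and spdk_nvme_identify invocations shown earlier in the trace can reach the target at 10.0.0.1:4420.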
00:16:05.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:05.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.139 --rc genhtml_branch_coverage=1 00:16:05.139 --rc genhtml_function_coverage=1 00:16:05.139 --rc genhtml_legend=1 00:16:05.139 --rc geninfo_all_blocks=1 00:16:05.139 --rc geninfo_unexecuted_blocks=1 00:16:05.139 00:16:05.139 ' 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:05.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.139 --rc genhtml_branch_coverage=1 00:16:05.139 --rc genhtml_function_coverage=1 00:16:05.139 --rc genhtml_legend=1 00:16:05.139 --rc geninfo_all_blocks=1 00:16:05.139 --rc geninfo_unexecuted_blocks=1 00:16:05.139 00:16:05.139 ' 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:05.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.139 --rc genhtml_branch_coverage=1 00:16:05.139 --rc genhtml_function_coverage=1 00:16:05.139 --rc genhtml_legend=1 00:16:05.139 --rc geninfo_all_blocks=1 00:16:05.139 --rc geninfo_unexecuted_blocks=1 00:16:05.139 00:16:05.139 ' 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:05.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.139 --rc genhtml_branch_coverage=1 00:16:05.139 --rc genhtml_function_coverage=1 00:16:05.139 --rc genhtml_legend=1 00:16:05.139 --rc geninfo_all_blocks=1 00:16:05.139 --rc geninfo_unexecuted_blocks=1 00:16:05.139 00:16:05.139 ' 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.139 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:05.140 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:05.140 Cannot find device "nvmf_init_br" 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:05.140 Cannot find device "nvmf_init_br2" 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:05.140 Cannot find device "nvmf_tgt_br" 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:05.140 Cannot find device "nvmf_tgt_br2" 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:05.140 Cannot find device "nvmf_init_br" 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:05.140 Cannot find device "nvmf_init_br2" 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:16:05.140 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:05.400 Cannot find device "nvmf_tgt_br" 00:16:05.400 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:16:05.400 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:05.400 Cannot find device "nvmf_tgt_br2" 00:16:05.400 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:16:05.400 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:05.400 Cannot find device "nvmf_br" 00:16:05.400 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:16:05.400 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:05.400 Cannot find device "nvmf_init_if" 00:16:05.400 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:16:05.400 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:05.400 Cannot find device "nvmf_init_if2" 00:16:05.400 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:16:05.400 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:05.400 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.400 10:58:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:16:05.400 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:05.400 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.400 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:16:05.400 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:05.400 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:05.401 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:05.661 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:05.661 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:16:05.661 00:16:05.661 --- 10.0.0.3 ping statistics --- 00:16:05.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.661 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:05.661 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:05.661 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:16:05.661 00:16:05.661 --- 10.0.0.4 ping statistics --- 00:16:05.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.661 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:05.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:05.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:16:05.661 00:16:05.661 --- 10.0.0.1 ping statistics --- 00:16:05.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.661 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:05.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:05.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:16:05.661 00:16:05.661 --- 10.0.0.2 ping statistics --- 00:16:05.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.661 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78282 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78282 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78282 ']' 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
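For reference, the nvmf_veth_init sequence traced above reduces to the condensed sketch below: one veth pair for the initiator side (kept in the root namespace) and one for the target side (moved into nvmf_tgt_ns_spdk), bridged together and opened up for NVMe/TCP on port 4420. Interface names and addresses are taken from this log; the actual script builds a second pair per side (10.0.0.2 and 10.0.0.4) the same way, which is omitted here for brevity.

# Condensed single-pair sketch of the test network assembled above (run as root).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# The rule is tagged with an SPDK_NVMF comment so nvmftestfini can later strip it
# via iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen in the teardown above.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.3    # initiator -> target namespace, matching the ping checks in the trace

Once the bridge forwards traffic, nvmf_tgt is started inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt ...) and the initiator-side interfaces reach it over 10.0.0.3/10.0.0.4, as the log shows next.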
00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.661 10:58:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=65c00bfac89e7765dfdcd95dd2425249 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ZOn 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 65c00bfac89e7765dfdcd95dd2425249 0 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 65c00bfac89e7765dfdcd95dd2425249 0 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=65c00bfac89e7765dfdcd95dd2425249 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ZOn 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ZOn 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ZOn 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:06.599 10:58:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:16:06.599 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0f90e135061346fde921bfe41257874ca2de065c40a44c4426924f96c547c7fc 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.sBU 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0f90e135061346fde921bfe41257874ca2de065c40a44c4426924f96c547c7fc 3 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0f90e135061346fde921bfe41257874ca2de065c40a44c4426924f96c547c7fc 3 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0f90e135061346fde921bfe41257874ca2de065c40a44c4426924f96c547c7fc 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.sBU 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.sBU 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.sBU 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=64737b63aed18fda2ca40b109dadbdc21f5f868943fda737 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ixq 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 64737b63aed18fda2ca40b109dadbdc21f5f868943fda737 0 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 64737b63aed18fda2ca40b109dadbdc21f5f868943fda737 0 
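The gen_dhchap_key calls traced above build the DH-HMAC-CHAP secrets that the test later registers on both the host and controller sides (the keys[] and ckeys[] arrays). A condensed sketch of what each call does, reusing the helpers named in the trace (format_dhchap_key is the nvmf/common.sh routine that wraps the raw hex in the DHHC-1:<digest>: secret encoding via a short inline python snippet, not reproduced here):

# gen_dhchap_key <digest> <len>, e.g. "null 32" or "sha512 64" as in the trace.
digest=null
len=32
# Draw len/2 random bytes and render them as a hex string.
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
# Keep the encoded secret in a private temp file; the file path is what the test passes around.
file=$(mktemp -t "spdk.key-${digest}.XXX")
format_dhchap_key "$key" 0 > "$file"   # digest ids from the trace: 0=null, 1=sha256, 2=sha384, 3=sha512
chmod 0600 "$file"
keys[0]=$file   # later registered with the target via the keyring_file_add_key RPC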
00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=64737b63aed18fda2ca40b109dadbdc21f5f868943fda737 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:06.600 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ixq 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ixq 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ixq 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0a965b2b60bec517b8e5c1834de1aa32e4f26f3eca1e8e31 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6st 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0a965b2b60bec517b8e5c1834de1aa32e4f26f3eca1e8e31 2 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0a965b2b60bec517b8e5c1834de1aa32e4f26f3eca1e8e31 2 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0a965b2b60bec517b8e5c1834de1aa32e4f26f3eca1e8e31 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6st 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6st 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.6st 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.859 10:58:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a65403b4fb210debdf0d04f090a2faa5 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.evd 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a65403b4fb210debdf0d04f090a2faa5 1 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a65403b4fb210debdf0d04f090a2faa5 1 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a65403b4fb210debdf0d04f090a2faa5 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.evd 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.evd 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.evd 00:16:06.859 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=066577c47756d59f01908b1f8547c0dc 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ajM 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 066577c47756d59f01908b1f8547c0dc 1 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 066577c47756d59f01908b1f8547c0dc 1 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=066577c47756d59f01908b1f8547c0dc 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ajM 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ajM 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ajM 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:06.860 10:58:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:16:06.860 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:06.860 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:06.860 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1322442309d1fffd134f39f7a44af95fc81db900ab9b8933 00:16:06.860 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:06.860 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.85N 00:16:06.860 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1322442309d1fffd134f39f7a44af95fc81db900ab9b8933 2 00:16:06.860 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1322442309d1fffd134f39f7a44af95fc81db900ab9b8933 2 00:16:06.860 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:06.860 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:06.860 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1322442309d1fffd134f39f7a44af95fc81db900ab9b8933 00:16:06.860 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:16:06.860 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.85N 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.85N 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.85N 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:07.120 10:59:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e1166c303b8d5767283523b101b26b0f 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yze 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e1166c303b8d5767283523b101b26b0f 0 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e1166c303b8d5767283523b101b26b0f 0 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e1166c303b8d5767283523b101b26b0f 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yze 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yze 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.yze 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=46a9b35dd2a20e3d7477becae5864fbd7f8449ca6dbf9d3d08836b35de440216 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.VKO 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 46a9b35dd2a20e3d7477becae5864fbd7f8449ca6dbf9d3d08836b35de440216 3 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 46a9b35dd2a20e3d7477becae5864fbd7f8449ca6dbf9d3d08836b35de440216 3 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=46a9b35dd2a20e3d7477becae5864fbd7f8449ca6dbf9d3d08836b35de440216 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.VKO 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.VKO 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.VKO 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78282 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78282 ']' 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.120 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ZOn 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.sBU ]] 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sBU 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ixq 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.6st ]] 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.6st 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.evd 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ajM ]] 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ajM 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.85N 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.yze ]] 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.yze 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.VKO 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:07.379 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:07.380 10:59:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:16:07.380 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:16:07.638 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:07.638 10:59:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:07.897 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:07.897 Waiting for block devices as requested 00:16:08.156 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:08.156 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:08.725 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:08.725 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:08.725 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:16:08.725 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:16:08.725 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:08.725 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:08.725 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:16:08.725 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:08.725 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:08.985 No valid GPT data, bailing 00:16:08.985 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:08.985 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:08.985 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:08.985 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:16:08.985 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:08.985 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:08.985 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:16:08.985 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:16:08.985 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:08.985 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:08.985 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:16:08.985 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:08.985 10:59:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:08.985 No valid GPT data, bailing 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:08.985 No valid GPT data, bailing 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:08.985 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:08.985 No valid GPT data, bailing 00:16:08.986 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid=0813c78c-bf40-477e-b94d-3900e5d9beb7 -a 10.0.0.1 -t tcp -s 4420 00:16:09.246 00:16:09.246 Discovery Log Number of Records 2, Generation counter 2 00:16:09.246 =====Discovery Log Entry 0====== 00:16:09.246 trtype: tcp 00:16:09.246 adrfam: ipv4 00:16:09.246 subtype: current discovery subsystem 00:16:09.246 treq: not specified, sq flow control disable supported 00:16:09.246 portid: 1 00:16:09.246 trsvcid: 4420 00:16:09.246 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:09.246 traddr: 10.0.0.1 00:16:09.246 eflags: none 00:16:09.246 sectype: none 00:16:09.246 =====Discovery Log Entry 1====== 00:16:09.246 trtype: tcp 00:16:09.246 adrfam: ipv4 00:16:09.246 subtype: nvme subsystem 00:16:09.246 treq: not specified, sq flow control disable supported 00:16:09.246 portid: 1 00:16:09.246 trsvcid: 4420 00:16:09.246 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:09.246 traddr: 10.0.0.1 00:16:09.246 eflags: none 00:16:09.246 sectype: none 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.246 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.506 nvme0n1 00:16:09.506 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.506 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.506 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.506 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.506 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.506 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: ]] 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.507 nvme0n1 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.507 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.767 
10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:09.767 10:59:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.767 nvme0n1 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:09.767 10:59:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.767 10:59:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.027 nvme0n1 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: ]] 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:10.027 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.027 10:59:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.028 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.028 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.028 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:10.028 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:10.028 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:10.028 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.028 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.028 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:10.028 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.028 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:10.028 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:10.028 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:10.028 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:10.028 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.028 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.287 nvme0n1 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:10.287 
10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
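[Editor's note] The loop traced above exercises NVMe/TCP in-band authentication (DH-HMAC-CHAP). For each combination of digest, DH group and key id, host/auth.sh pushes the DHHC-1 secret (and, when a bidirectional controller secret exists, the matching ckey) into the kernel nvmet target via nvmet_auth_set_key, restricts the SPDK host side to the same --dhchap-digests/--dhchap-dhgroups pair, then attaches and detaches the controller. The sketch below condenses one such connect_authenticate iteration; it is an annotation, not part of the log. It assumes rpc_cmd is the test suite's wrapper around SPDK's rpc.py and that DH-CHAP keys named key1/ckey1 were registered earlier in the test run (that setup is outside this excerpt).

# Sketch of a single connect_authenticate iteration (assumptions noted above).
# 1. Allow only the digest/dhgroup pair under test on the SPDK (host) side.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# 2. Attach the controller over TCP, authenticating with the host key and,
#    for bidirectional authentication, the controller key.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. Verify the controller came up, then tear it down before the next key id.
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
rpc_cmd bdev_nvme_detach_controller nvme0

The trace then repeats this sequence for key ids 0 through 4 and for each FFDHE group in the test's dhgroups list (ffdhe2048, ffdhe3072 and ffdhe4096 are visible in this excerpt), which is why the surrounding output is so repetitive.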
00:16:10.287 nvme0n1 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:10.287 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: ]] 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:10.547 10:59:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.547 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.807 nvme0n1 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:10.807 10:59:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:10.807 10:59:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:10.807 nvme0n1 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.807 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.067 10:59:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.067 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.068 nvme0n1 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: ]] 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.068 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.328 nvme0n1 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.328 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.588 nvme0n1 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:11.588 10:59:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: ]] 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.157 10:59:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.157 nvme0n1 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.157 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.416 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.416 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.416 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:12.416 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:12.416 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:12.416 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.416 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.416 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:12.416 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.416 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:12.416 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:12.416 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:12.416 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.417 10:59:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.417 nvme0n1 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.417 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.677 nvme0n1 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: ]] 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.677 10:59:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.937 nvme0n1 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:12.937 10:59:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.937 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.197 nvme0n1 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:13.197 10:59:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:14.573 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: ]] 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.574 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.832 nvme0n1 00:16:14.832 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.832 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:14.832 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.832 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:14.832 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.832 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.832 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.832 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:14.832 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.832 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.832 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.832 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:14.832 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.833 10:59:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.092 nvme0n1 00:16:15.092 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.092 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.092 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.092 10:59:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.092 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.351 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.351 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.351 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.351 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.351 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.352 10:59:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.352 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.612 nvme0n1 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: ]] 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:15.612 10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.612 
10:59:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.871 nvme0n1 00:16:15.871 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.871 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:15.871 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.871 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:15.871 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:15.871 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.131 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.132 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.411 nvme0n1 00:16:16.411 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.411 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.411 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:16.411 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.411 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.411 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.411 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.411 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.411 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.411 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.411 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.411 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.411 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.412 10:59:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: ]] 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.412 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.982 nvme0n1 00:16:16.982 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.982 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:16.982 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.982 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.982 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:16.982 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.982 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.982 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:16.982 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.982 10:59:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.982 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.552 nvme0n1 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:17.552 
10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.552 10:59:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.122 nvme0n1 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: ]] 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:18.122 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.123 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.692 nvme0n1 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.692 10:59:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:18.692 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:18.693 10:59:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.693 10:59:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.262 nvme0n1 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:19.262 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: ]] 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.263 nvme0n1 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.263 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.523 nvme0n1 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:19.523 
10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:19.523 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.524 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.783 nvme0n1 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: ]] 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:19.783 
10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:19.783 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:19.784 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:19.784 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:19.784 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:19.784 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.784 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.784 nvme0n1 00:16:19.784 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.784 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:19.784 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:19.784 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.784 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:19.784 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:20.043 10:59:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:20.043 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.043 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:20.043 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.043 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.043 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.043 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.043 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:20.043 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:20.043 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:20.043 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.043 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.043 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:20.043 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.043 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.044 nvme0n1 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: ]] 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.044 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.304 nvme0n1 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.304 
10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:20.304 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:20.305 10:59:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.305 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.565 nvme0n1 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:20.565 10:59:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.565 nvme0n1 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.565 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.825 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: ]] 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.826 10:59:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.826 nvme0n1 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:20.826 
10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:20.826 10:59:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:16:21.087 nvme0n1 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: ]] 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:21.087 10:59:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.087 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.348 nvme0n1 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:21.348 10:59:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.348 10:59:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.348 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.608 nvme0n1 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:21.608 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.609 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.609 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.609 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:21.609 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:21.609 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:21.609 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:21.609 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.609 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.609 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:21.609 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.609 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:21.609 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:21.609 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:21.609 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:21.609 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.609 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.869 nvme0n1 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: ]] 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.869 10:59:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.128 nvme0n1 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:22.128 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.129 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.388 nvme0n1 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: ]] 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.388 10:59:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.388 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.648 nvme0n1 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:22.648 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:22.908 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.908 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:22.908 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.908 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:22.908 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:22.908 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:22.908 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:22.908 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:22.908 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:22.908 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:22.908 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:22.908 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:22.908 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:22.908 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:22.908 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.908 10:59:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.908 10:59:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.167 nvme0n1 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.167 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.427 nvme0n1 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: ]] 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.427 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.687 nvme0n1 00:16:23.687 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:23.950 10:59:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:23.950 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:23.951 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:23.951 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:23.951 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:23.951 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:23.951 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:23.951 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:23.951 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.951 10:59:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.211 nvme0n1 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: ]] 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.211 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.780 nvme0n1 00:16:24.780 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.780 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:24.780 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:24.780 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.780 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.780 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.780 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.780 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:24.780 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.780 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.780 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.781 10:59:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.350 nvme0n1 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:25.350 10:59:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.350 10:59:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.350 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.938 nvme0n1 00:16:25.938 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.938 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:25.938 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:25.938 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.938 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.938 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.938 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.938 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:25.938 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.938 10:59:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: ]] 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:25.938 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:25.939 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:25.939 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:25.939 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:25.939 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:25.939 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:25.939 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.939 
10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.509 nvme0n1 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.509 10:59:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.078 nvme0n1 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:16:27.078 10:59:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: ]] 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:27.078 10:59:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.078 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.338 nvme0n1 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:27.338 10:59:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.338 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.339 nvme0n1 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.339 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.599 nvme0n1 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:27.599 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: ]] 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.600 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.859 nvme0n1 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.859 10:59:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.859 nvme0n1 00:16:27.859 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.859 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:27.859 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.859 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:27.859 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.859 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: ]] 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.119 nvme0n1 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.119 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.379 nvme0n1 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:16:28.379 
10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.379 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.639 nvme0n1 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: ]] 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.639 
10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.639 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.898 nvme0n1 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.898 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.899 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.899 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.899 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:28.899 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:28.899 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:28.899 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:28.899 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:28.899 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:28.899 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:28.899 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:28.899 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:28.899 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:28.899 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:28.899 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:28.899 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.899 10:59:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.899 nvme0n1 00:16:28.899 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.899 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:28.899 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:28.899 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.899 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:28.899 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.899 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.899 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:28.899 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.899 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: ]] 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:29.158 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.159 nvme0n1 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.159 
10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.159 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.418 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:29.419 10:59:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.419 nvme0n1 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.419 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:29.678 10:59:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.678 nvme0n1 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.678 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: ]] 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.938 10:59:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:29.938 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:29.939 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:29.939 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:29.939 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:29.939 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:29.939 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:29.939 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:29.939 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:29.939 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:29.939 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:29.939 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.939 10:59:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.939 nvme0n1 00:16:29.939 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.939 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:29.939 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:29.939 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.939 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.939 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.939 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.939 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:29.939 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.939 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:29.939 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.939 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:29.939 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:16:29.939 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:29.939 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:30.199 
10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:16:30.199 nvme0n1 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: ]] 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:30.199 10:59:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:30.199 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.458 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.718 nvme0n1 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:30.718 10:59:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:30.718 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.719 10:59:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.719 10:59:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.978 nvme0n1 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:30.978 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.979 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.547 nvme0n1 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:31.547 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: ]] 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.548 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.807 nvme0n1 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.807 10:59:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.066 nvme0n1 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjVjMDBiZmFjODllNzc2NWRmZGNkOTVkZDI0MjUyNDl62Zcz: 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: ]] 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MGY5MGUxMzUwNjEzNDZmZGU5MjFiZmU0MTI1Nzg3NGNhMmRlMDY1YzQwYTQ0YzQ0MjY5MjRmOTZjNTQ3YzdmY4vv8CU=: 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.066 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.326 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.326 10:59:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:32.326 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:32.326 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:32.326 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:32.326 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:32.326 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:32.326 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:32.326 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:32.326 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:32.326 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:32.326 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:32.326 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.326 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.326 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.584 nvme0n1 00:16:32.584 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.584 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:32.584 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:32.584 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.584 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.584 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:16:32.844 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.845 10:59:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.845 10:59:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.412 nvme0n1 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:33.412 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.413 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.981 nvme0n1 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTMyMjQ0MjMwOWQxZmZmZDEzNGYzOWY3YTQ0YWY5NWZjODFkYjkwMGFiOWI4OTMz/xj9Og==: 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: ]] 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTExNjZjMzAzYjhkNTc2NzI4MzUyM2IxMDFiMjZiMGaGK4sa: 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.981 10:59:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.548 nvme0n1 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:34.548 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDZhOWIzNWRkMmEyMGUzZDc0NzdiZWNhZTU4NjRmYmQ3Zjg0NDljYTZkYmY5ZDNkMDg4MzZiMzVkZTQ0MDIxNhJA0RE=: 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:34.549 10:59:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.549 10:59:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.117 nvme0n1 00:16:35.117 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.117 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.117 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:35.117 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.117 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.117 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.117 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.117 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:35.117 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.117 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.118 request: 00:16:35.118 { 00:16:35.118 "name": "nvme0", 00:16:35.118 "trtype": "tcp", 00:16:35.118 "traddr": "10.0.0.1", 00:16:35.118 "adrfam": "ipv4", 00:16:35.118 "trsvcid": "4420", 00:16:35.118 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:35.118 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:35.118 "prchk_reftag": false, 00:16:35.118 "prchk_guard": false, 00:16:35.118 "hdgst": false, 00:16:35.118 "ddgst": false, 00:16:35.118 "allow_unrecognized_csi": false, 00:16:35.118 "method": "bdev_nvme_attach_controller", 00:16:35.118 "req_id": 1 00:16:35.118 } 00:16:35.118 Got JSON-RPC error response 00:16:35.118 response: 00:16:35.118 { 00:16:35.118 "code": -5, 00:16:35.118 "message": "Input/output error" 00:16:35.118 } 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.118 request: 00:16:35.118 { 00:16:35.118 "name": "nvme0", 00:16:35.118 "trtype": "tcp", 00:16:35.118 "traddr": "10.0.0.1", 00:16:35.118 "adrfam": "ipv4", 00:16:35.118 "trsvcid": "4420", 00:16:35.118 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:35.118 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:35.118 "prchk_reftag": false, 00:16:35.118 "prchk_guard": false, 00:16:35.118 "hdgst": false, 00:16:35.118 "ddgst": false, 00:16:35.118 "dhchap_key": "key2", 00:16:35.118 "allow_unrecognized_csi": false, 00:16:35.118 "method": "bdev_nvme_attach_controller", 00:16:35.118 "req_id": 1 00:16:35.118 } 00:16:35.118 Got JSON-RPC error response 00:16:35.118 response: 00:16:35.118 { 00:16:35.118 "code": -5, 00:16:35.118 "message": "Input/output error" 00:16:35.118 } 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:35.118 10:59:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.118 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.378 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:35.378 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.378 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:35.378 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:35.378 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:35.378 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:35.378 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.379 request: 00:16:35.379 { 00:16:35.379 "name": "nvme0", 00:16:35.379 "trtype": "tcp", 00:16:35.379 "traddr": "10.0.0.1", 00:16:35.379 "adrfam": "ipv4", 00:16:35.379 "trsvcid": "4420", 
00:16:35.379 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:16:35.379 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:16:35.379 "prchk_reftag": false, 00:16:35.379 "prchk_guard": false, 00:16:35.379 "hdgst": false, 00:16:35.379 "ddgst": false, 00:16:35.379 "dhchap_key": "key1", 00:16:35.379 "dhchap_ctrlr_key": "ckey2", 00:16:35.379 "allow_unrecognized_csi": false, 00:16:35.379 "method": "bdev_nvme_attach_controller", 00:16:35.379 "req_id": 1 00:16:35.379 } 00:16:35.379 Got JSON-RPC error response 00:16:35.379 response: 00:16:35.379 { 00:16:35.379 "code": -5, 00:16:35.379 "message": "Input/output error" 00:16:35.379 } 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.379 nvme0n1 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.379 request: 00:16:35.379 { 00:16:35.379 "name": "nvme0", 00:16:35.379 "dhchap_key": "key1", 00:16:35.379 "dhchap_ctrlr_key": "ckey2", 00:16:35.379 "method": "bdev_nvme_set_keys", 00:16:35.379 "req_id": 1 00:16:35.379 } 00:16:35.379 Got JSON-RPC error response 00:16:35.379 response: 00:16:35.379 
{ 00:16:35.379 "code": -13, 00:16:35.379 "message": "Permission denied" 00:16:35.379 } 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:16:35.379 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.639 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:16:35.639 10:59:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjQ3MzdiNjNhZWQxOGZkYTJjYTQwYjEwOWRhZGJkYzIxZjVmODY4OTQzZmRhNzM3kUD87g==: 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: ]] 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE5NjViMmI2MGJlYzUxN2I4ZTVjMTgzNGRlMWFhMzJlNGYyNmYzZWNhMWU4ZTMxElrHLQ==: 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.576 nvme0n1 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTY1NDAzYjRmYjIxMGRlYmRmMGQwNGYwOTBhMmZhYTVVdLXR: 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: ]] 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDY2NTc3YzQ3NzU2ZDU5ZjAxOTA4YjFmODU0N2MwZGMqq8NC: 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.576 request: 00:16:36.576 { 00:16:36.576 "name": "nvme0", 00:16:36.576 "dhchap_key": "key2", 00:16:36.576 "dhchap_ctrlr_key": "ckey1", 00:16:36.576 "method": "bdev_nvme_set_keys", 00:16:36.576 "req_id": 1 00:16:36.576 } 00:16:36.576 Got JSON-RPC error response 00:16:36.576 response: 00:16:36.576 { 00:16:36.576 "code": -13, 00:16:36.576 "message": "Permission denied" 00:16:36.576 } 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.576 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.836 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.836 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:16:36.836 10:59:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:37.774 rmmod nvme_tcp 00:16:37.774 rmmod nvme_fabrics 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78282 ']' 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78282 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78282 ']' 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78282 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78282 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:37.774 killing process with pid 78282 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78282' 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78282 00:16:37.774 10:59:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78282 00:16:38.034 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:38.034 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:38.034 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:38.034 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:16:38.034 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:16:38.034 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:38.034 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:16:38.034 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:38.034 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:38.034 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:38.034 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:38.034 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:38.034 10:59:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:38.034 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:38.034 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:38.293 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:38.294 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:38.294 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:16:38.294 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:16:38.552 10:59:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:39.119 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:39.378 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:16:39.378 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:39.378 10:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ZOn /tmp/spdk.key-null.ixq /tmp/spdk.key-sha256.evd /tmp/spdk.key-sha384.85N /tmp/spdk.key-sha512.VKO /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:16:39.378 10:59:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:39.946 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:39.946 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:39.946 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:39.946 00:16:39.946 real 0m35.141s 00:16:39.946 user 0m32.861s 00:16:39.946 sys 0m4.985s 00:16:39.946 10:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.946 10:59:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.946 ************************************ 00:16:39.946 END TEST nvmf_auth_host 00:16:39.946 ************************************ 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.205 ************************************ 00:16:40.205 START TEST nvmf_digest 00:16:40.205 ************************************ 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:40.205 * Looking for test storage... 
00:16:40.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:16:40.205 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:40.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.206 --rc genhtml_branch_coverage=1 00:16:40.206 --rc genhtml_function_coverage=1 00:16:40.206 --rc genhtml_legend=1 00:16:40.206 --rc geninfo_all_blocks=1 00:16:40.206 --rc geninfo_unexecuted_blocks=1 00:16:40.206 00:16:40.206 ' 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:40.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.206 --rc genhtml_branch_coverage=1 00:16:40.206 --rc genhtml_function_coverage=1 00:16:40.206 --rc genhtml_legend=1 00:16:40.206 --rc geninfo_all_blocks=1 00:16:40.206 --rc geninfo_unexecuted_blocks=1 00:16:40.206 00:16:40.206 ' 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:40.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.206 --rc genhtml_branch_coverage=1 00:16:40.206 --rc genhtml_function_coverage=1 00:16:40.206 --rc genhtml_legend=1 00:16:40.206 --rc geninfo_all_blocks=1 00:16:40.206 --rc geninfo_unexecuted_blocks=1 00:16:40.206 00:16:40.206 ' 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:40.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:40.206 --rc genhtml_branch_coverage=1 00:16:40.206 --rc genhtml_function_coverage=1 00:16:40.206 --rc genhtml_legend=1 00:16:40.206 --rc geninfo_all_blocks=1 00:16:40.206 --rc geninfo_unexecuted_blocks=1 00:16:40.206 00:16:40.206 ' 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.206 10:59:33 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.206 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:40.465 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:40.465 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:40.466 Cannot find device "nvmf_init_br" 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:40.466 Cannot find device "nvmf_init_br2" 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:40.466 Cannot find device "nvmf_tgt_br" 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:16:40.466 Cannot find device "nvmf_tgt_br2" 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:40.466 Cannot find device "nvmf_init_br" 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:40.466 Cannot find device "nvmf_init_br2" 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:40.466 Cannot find device "nvmf_tgt_br" 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:40.466 Cannot find device "nvmf_tgt_br2" 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:40.466 Cannot find device "nvmf_br" 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:40.466 Cannot find device "nvmf_init_if" 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:40.466 Cannot find device "nvmf_init_if2" 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:40.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:40.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:40.466 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:40.724 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:40.724 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:40.724 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:40.724 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:40.724 10:59:33 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:40.724 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:40.724 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:40.724 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:40.725 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:40.725 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:16:40.725 00:16:40.725 --- 10.0.0.3 ping statistics --- 00:16:40.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.725 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:40.725 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:40.725 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.098 ms 00:16:40.725 00:16:40.725 --- 10.0.0.4 ping statistics --- 00:16:40.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.725 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:40.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:16:40.725 00:16:40.725 --- 10.0.0.1 ping statistics --- 00:16:40.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.725 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:40.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:16:40.725 00:16:40.725 --- 10.0.0.2 ping statistics --- 00:16:40.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.725 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:40.725 ************************************ 00:16:40.725 START TEST nvmf_digest_clean 00:16:40.725 ************************************ 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
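[Annotation] The pings above verify the veth/bridge fixture that nvmf_veth_init just built. A minimal bash sketch of that topology, reconstructed from the ip/iptables commands traced above; it reuses the same namespace, interface, and address names (nvmf_tgt_ns_spdk, nvmf_init_if, nvmf_tgt_if, nvmf_br, 10.0.0.0/24) purely for illustration and is not the harness code itself.

  # Rebuild the initiator<->target test network the digest suite pings above (needs root).
  set -e
  ip netns add nvmf_tgt_ns_spdk                      # target side lives in its own netns
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk     # move the target end into the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if           # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br            # bridge the host-side veth ends
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                 # should answer as in the trace above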
00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:16:40.725 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:16:40.984 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:40.984 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:40.984 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:40.984 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79916 00:16:40.984 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:40.984 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79916 00:16:40.984 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79916 ']' 00:16:40.984 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.984 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.984 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.984 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.984 10:59:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:40.984 [2024-12-09 10:59:33.961317] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:16:40.984 [2024-12-09 10:59:33.961399] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.984 [2024-12-09 10:59:34.111385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.984 [2024-12-09 10:59:34.155124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.984 [2024-12-09 10:59:34.155215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.984 [2024-12-09 10:59:34.155222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.984 [2024-12-09 10:59:34.155226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.984 [2024-12-09 10:59:34.155241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:40.984 [2024-12-09 10:59:34.155522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:41.921 [2024-12-09 10:59:34.906809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:41.921 null0 00:16:41.921 [2024-12-09 10:59:34.953493] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.921 [2024-12-09 10:59:34.977543] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79947 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79947 /var/tmp/bperf.sock 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 79947 ']' 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.921 10:59:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:41.921 [2024-12-09 10:59:35.037940] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:16:41.921 [2024-12-09 10:59:35.038001] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79947 ] 00:16:42.180 [2024-12-09 10:59:35.189914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.180 [2024-12-09 10:59:35.236840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.747 10:59:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.747 10:59:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:42.747 10:59:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:42.748 10:59:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:42.748 10:59:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:43.006 [2024-12-09 10:59:36.112105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:43.006 10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:43.006 10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:43.265 nvme0n1 00:16:43.265 10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:43.265 10:59:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:43.524 Running I/O for 2 seconds... 
00:16:45.397 20193.00 IOPS, 78.88 MiB/s [2024-12-09T10:59:38.576Z] 20193.00 IOPS, 78.88 MiB/s 00:16:45.397 Latency(us) 00:16:45.397 [2024-12-09T10:59:38.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.397 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:45.397 nvme0n1 : 2.00 20208.72 78.94 0.00 0.00 6330.16 5895.38 18544.68 00:16:45.397 [2024-12-09T10:59:38.576Z] =================================================================================================================== 00:16:45.397 [2024-12-09T10:59:38.576Z] Total : 20208.72 78.94 0.00 0.00 6330.16 5895.38 18544.68 00:16:45.397 { 00:16:45.397 "results": [ 00:16:45.397 { 00:16:45.397 "job": "nvme0n1", 00:16:45.397 "core_mask": "0x2", 00:16:45.397 "workload": "randread", 00:16:45.397 "status": "finished", 00:16:45.397 "queue_depth": 128, 00:16:45.397 "io_size": 4096, 00:16:45.397 "runtime": 2.004778, 00:16:45.397 "iops": 20208.72136465983, 00:16:45.397 "mibps": 78.94031783070245, 00:16:45.397 "io_failed": 0, 00:16:45.397 "io_timeout": 0, 00:16:45.397 "avg_latency_us": 6330.159165811031, 00:16:45.397 "min_latency_us": 5895.378165938864, 00:16:45.397 "max_latency_us": 18544.684716157204 00:16:45.397 } 00:16:45.397 ], 00:16:45.397 "core_count": 1 00:16:45.397 } 00:16:45.397 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:45.397 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:45.397 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:45.397 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:45.397 | select(.opcode=="crc32c") 00:16:45.397 | "\(.module_name) \(.executed)"' 00:16:45.397 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:45.656 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:45.656 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:45.656 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:45.656 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:45.656 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79947 00:16:45.656 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79947 ']' 00:16:45.656 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79947 00:16:45.656 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:45.656 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.656 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79947 00:16:45.656 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:45.656 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:16:45.656 killing process with pid 79947 00:16:45.656 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79947' 00:16:45.656 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79947 00:16:45.656 Received shutdown signal, test time was about 2.000000 seconds 00:16:45.656 00:16:45.656 Latency(us) 00:16:45.656 [2024-12-09T10:59:38.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.656 [2024-12-09T10:59:38.835Z] =================================================================================================================== 00:16:45.656 [2024-12-09T10:59:38.835Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:45.656 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79947 00:16:45.919 10:59:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:16:45.919 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:45.919 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:45.919 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:16:45.919 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:45.919 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:45.919 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:45.919 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80003 00:16:45.919 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80003 /var/tmp/bperf.sock 00:16:45.919 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80003 ']' 00:16:45.919 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:45.919 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.919 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:45.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:45.919 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:45.919 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.919 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:45.919 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:45.919 Zero copy mechanism will not be used. 00:16:45.919 [2024-12-09 10:59:39.052901] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:16:45.919 [2024-12-09 10:59:39.052963] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80003 ] 00:16:46.187 [2024-12-09 10:59:39.205836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.187 [2024-12-09 10:59:39.250970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.791 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.791 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:46.791 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:46.791 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:46.791 10:59:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:47.067 [2024-12-09 10:59:40.114281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:47.067 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:47.067 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:47.326 nvme0n1 00:16:47.326 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:47.326 10:59:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:47.326 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:47.326 Zero copy mechanism will not be used. 00:16:47.326 Running I/O for 2 seconds... 
00:16:49.642 8960.00 IOPS, 1120.00 MiB/s [2024-12-09T10:59:42.821Z] 9000.00 IOPS, 1125.00 MiB/s 00:16:49.642 Latency(us) 00:16:49.642 [2024-12-09T10:59:42.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.642 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:49.642 nvme0n1 : 2.00 8998.61 1124.83 0.00 0.00 1775.56 1659.86 11046.68 00:16:49.642 [2024-12-09T10:59:42.821Z] =================================================================================================================== 00:16:49.642 [2024-12-09T10:59:42.821Z] Total : 8998.61 1124.83 0.00 0.00 1775.56 1659.86 11046.68 00:16:49.642 { 00:16:49.642 "results": [ 00:16:49.642 { 00:16:49.642 "job": "nvme0n1", 00:16:49.642 "core_mask": "0x2", 00:16:49.642 "workload": "randread", 00:16:49.642 "status": "finished", 00:16:49.642 "queue_depth": 16, 00:16:49.642 "io_size": 131072, 00:16:49.642 "runtime": 2.002088, 00:16:49.642 "iops": 8998.605455904037, 00:16:49.642 "mibps": 1124.8256819880046, 00:16:49.642 "io_failed": 0, 00:16:49.642 "io_timeout": 0, 00:16:49.642 "avg_latency_us": 1775.561649615674, 00:16:49.642 "min_latency_us": 1659.8637554585152, 00:16:49.642 "max_latency_us": 11046.679475982533 00:16:49.642 } 00:16:49.642 ], 00:16:49.642 "core_count": 1 00:16:49.642 } 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:49.642 | select(.opcode=="crc32c") 00:16:49.642 | "\(.module_name) \(.executed)"' 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80003 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80003 ']' 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80003 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80003 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:16:49.642 killing process with pid 80003 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80003' 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80003 00:16:49.642 Received shutdown signal, test time was about 2.000000 seconds 00:16:49.642 00:16:49.642 Latency(us) 00:16:49.642 [2024-12-09T10:59:42.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.642 [2024-12-09T10:59:42.821Z] =================================================================================================================== 00:16:49.642 [2024-12-09T10:59:42.821Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:49.642 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80003 00:16:49.902 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:16:49.902 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:49.902 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:49.902 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:49.902 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:16:49.902 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:16:49.902 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:49.902 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:49.902 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80062 00:16:49.902 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80062 /var/tmp/bperf.sock 00:16:49.902 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80062 ']' 00:16:49.902 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:49.902 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:49.902 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:49.902 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.902 10:59:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:49.902 [2024-12-09 10:59:43.006260] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:16:49.902 [2024-12-09 10:59:43.006341] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80062 ] 00:16:50.161 [2024-12-09 10:59:43.159430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.161 [2024-12-09 10:59:43.205410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.730 10:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.730 10:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:50.730 10:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:50.730 10:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:50.730 10:59:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:50.989 [2024-12-09 10:59:44.064822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:50.989 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:50.989 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:51.249 nvme0n1 00:16:51.249 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:51.249 10:59:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:51.508 Running I/O for 2 seconds... 
00:16:53.391 21591.00 IOPS, 84.34 MiB/s [2024-12-09T10:59:46.570Z] 21654.00 IOPS, 84.59 MiB/s 00:16:53.391 Latency(us) 00:16:53.391 [2024-12-09T10:59:46.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.391 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:53.391 nvme0n1 : 2.01 21641.29 84.54 0.00 0.00 5909.84 1888.81 11847.99 00:16:53.391 [2024-12-09T10:59:46.570Z] =================================================================================================================== 00:16:53.391 [2024-12-09T10:59:46.570Z] Total : 21641.29 84.54 0.00 0.00 5909.84 1888.81 11847.99 00:16:53.391 { 00:16:53.391 "results": [ 00:16:53.391 { 00:16:53.391 "job": "nvme0n1", 00:16:53.391 "core_mask": "0x2", 00:16:53.391 "workload": "randwrite", 00:16:53.391 "status": "finished", 00:16:53.391 "queue_depth": 128, 00:16:53.391 "io_size": 4096, 00:16:53.391 "runtime": 2.007089, 00:16:53.391 "iops": 21641.292438950142, 00:16:53.391 "mibps": 84.536298589649, 00:16:53.391 "io_failed": 0, 00:16:53.391 "io_timeout": 0, 00:16:53.391 "avg_latency_us": 5909.844310175167, 00:16:53.391 "min_latency_us": 1888.810480349345, 00:16:53.391 "max_latency_us": 11847.993013100437 00:16:53.391 } 00:16:53.391 ], 00:16:53.391 "core_count": 1 00:16:53.391 } 00:16:53.391 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:53.391 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:53.391 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:53.391 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:53.391 | select(.opcode=="crc32c") 00:16:53.391 | "\(.module_name) \(.executed)"' 00:16:53.391 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:53.651 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:53.651 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:53.651 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:53.651 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:53.651 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80062 00:16:53.651 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80062 ']' 00:16:53.651 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80062 00:16:53.651 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:53.651 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.651 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80062 00:16:53.651 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:53.651 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:16:53.651 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80062' 00:16:53.651 killing process with pid 80062 00:16:53.651 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80062 00:16:53.651 Received shutdown signal, test time was about 2.000000 seconds 00:16:53.651 00:16:53.651 Latency(us) 00:16:53.651 [2024-12-09T10:59:46.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.651 [2024-12-09T10:59:46.830Z] =================================================================================================================== 00:16:53.651 [2024-12-09T10:59:46.830Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:53.651 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80062 00:16:53.910 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:16:53.910 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:16:53.910 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:53.910 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:16:53.910 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:16:53.910 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:16:53.910 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:16:53.910 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80119 00:16:53.910 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80119 /var/tmp/bperf.sock 00:16:53.910 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:53.910 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80119 ']' 00:16:53.910 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:53.910 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:53.910 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:53.911 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.911 10:59:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:53.911 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:53.911 Zero copy mechanism will not be used. 00:16:53.911 [2024-12-09 10:59:47.010558] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:16:53.911 [2024-12-09 10:59:47.010621] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80119 ] 00:16:54.170 [2024-12-09 10:59:47.161558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.170 [2024-12-09 10:59:47.208689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.740 10:59:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.740 10:59:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:16:54.740 10:59:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:16:54.740 10:59:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:16:54.740 10:59:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:54.999 [2024-12-09 10:59:48.060430] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:54.999 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:54.999 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:55.259 nvme0n1 00:16:55.259 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:16:55.259 10:59:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:55.519 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:55.519 Zero copy mechanism will not be used. 00:16:55.519 Running I/O for 2 seconds... 
00:16:57.392 9223.00 IOPS, 1152.88 MiB/s [2024-12-09T10:59:50.571Z] 9186.50 IOPS, 1148.31 MiB/s 00:16:57.392 Latency(us) 00:16:57.392 [2024-12-09T10:59:50.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.392 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:16:57.392 nvme0n1 : 2.00 9182.64 1147.83 0.00 0.00 1739.20 1166.20 11218.39 00:16:57.392 [2024-12-09T10:59:50.571Z] =================================================================================================================== 00:16:57.392 [2024-12-09T10:59:50.571Z] Total : 9182.64 1147.83 0.00 0.00 1739.20 1166.20 11218.39 00:16:57.392 { 00:16:57.392 "results": [ 00:16:57.392 { 00:16:57.392 "job": "nvme0n1", 00:16:57.392 "core_mask": "0x2", 00:16:57.392 "workload": "randwrite", 00:16:57.392 "status": "finished", 00:16:57.392 "queue_depth": 16, 00:16:57.392 "io_size": 131072, 00:16:57.392 "runtime": 2.003237, 00:16:57.392 "iops": 9182.637900557947, 00:16:57.393 "mibps": 1147.8297375697434, 00:16:57.393 "io_failed": 0, 00:16:57.393 "io_timeout": 0, 00:16:57.393 "avg_latency_us": 1739.2024456997167, 00:16:57.393 "min_latency_us": 1166.1973799126638, 00:16:57.393 "max_latency_us": 11218.389519650655 00:16:57.393 } 00:16:57.393 ], 00:16:57.393 "core_count": 1 00:16:57.393 } 00:16:57.393 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:16:57.393 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:16:57.393 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:57.393 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:57.393 | select(.opcode=="crc32c") 00:16:57.393 | "\(.module_name) \(.executed)"' 00:16:57.393 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:57.652 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:16:57.652 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:16:57.652 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:16:57.652 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:57.652 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80119 00:16:57.652 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80119 ']' 00:16:57.652 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80119 00:16:57.652 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:57.652 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.652 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80119 00:16:57.652 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:57.652 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo 
']' 00:16:57.652 killing process with pid 80119 00:16:57.652 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80119' 00:16:57.652 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80119 00:16:57.652 Received shutdown signal, test time was about 2.000000 seconds 00:16:57.652 00:16:57.652 Latency(us) 00:16:57.652 [2024-12-09T10:59:50.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.652 [2024-12-09T10:59:50.831Z] =================================================================================================================== 00:16:57.652 [2024-12-09T10:59:50.831Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:57.652 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80119 00:16:57.912 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79916 00:16:57.912 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 79916 ']' 00:16:57.912 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 79916 00:16:57.912 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:16:57.912 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.912 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79916 00:16:57.912 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.912 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.912 killing process with pid 79916 00:16:57.912 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79916' 00:16:57.912 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 79916 00:16:57.912 10:59:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 79916 00:16:58.172 00:16:58.172 real 0m17.270s 00:16:58.172 user 0m32.858s 00:16:58.172 sys 0m4.421s 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:16:58.172 ************************************ 00:16:58.172 END TEST nvmf_digest_clean 00:16:58.172 ************************************ 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:16:58.172 ************************************ 00:16:58.172 START TEST nvmf_digest_error 00:16:58.172 ************************************ 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:16:58.172 10:59:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80202 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80202 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80202 ']' 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.172 10:59:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:58.172 [2024-12-09 10:59:51.309535] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:16:58.172 [2024-12-09 10:59:51.309599] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.432 [2024-12-09 10:59:51.459734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.432 [2024-12-09 10:59:51.507856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.432 [2024-12-09 10:59:51.507897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.432 [2024-12-09 10:59:51.507919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.432 [2024-12-09 10:59:51.507924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.432 [2024-12-09 10:59:51.507929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:58.432 [2024-12-09 10:59:51.508197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:59.372 [2024-12-09 10:59:52.247222] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:59.372 [2024-12-09 10:59:52.299561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:59.372 null0 00:16:59.372 [2024-12-09 10:59:52.345800] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.372 [2024-12-09 10:59:52.369853] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80234 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80234 /var/tmp/bperf.sock 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:16:59.372 10:59:52 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80234 ']' 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.372 10:59:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:16:59.372 [2024-12-09 10:59:52.429690] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:16:59.372 [2024-12-09 10:59:52.429761] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80234 ] 00:16:59.632 [2024-12-09 10:59:52.580935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.632 [2024-12-09 10:59:52.626004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.632 [2024-12-09 10:59:52.666594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:00.201 10:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.201 10:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:00.201 10:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:00.201 10:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:00.461 10:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:00.461 10:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.461 10:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:00.461 10:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.461 10:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:00.461 10:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:00.720 nvme0n1 00:17:00.720 10:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:00.721 10:59:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.721 10:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:00.721 10:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.721 10:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:00.721 10:59:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:00.721 Running I/O for 2 seconds... 00:17:00.721 [2024-12-09 10:59:53.832499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.721 [2024-12-09 10:59:53.832546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.721 [2024-12-09 10:59:53.832555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.721 [2024-12-09 10:59:53.845348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.721 [2024-12-09 10:59:53.845382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.721 [2024-12-09 10:59:53.845390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.721 [2024-12-09 10:59:53.858200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.721 [2024-12-09 10:59:53.858228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.721 [2024-12-09 10:59:53.858252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.721 [2024-12-09 10:59:53.870968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.721 [2024-12-09 10:59:53.870996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.721 [2024-12-09 10:59:53.871004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.721 [2024-12-09 10:59:53.883644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.721 [2024-12-09 10:59:53.883674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.721 [2024-12-09 10:59:53.883681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.721 [2024-12-09 10:59:53.896480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.721 [2024-12-09 10:59:53.896510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21885 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.721 [2024-12-09 10:59:53.896534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:53.909683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:53.909711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:53.909719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:53.922593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:53.922623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:53.922631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:53.935410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:53.935438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:53.935446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:53.948530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:53.948561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:53.948585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:53.961820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:53.961846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:53.961853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:53.974719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:53.974757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:53.974766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:53.987673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:53.987701] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:53.987708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:54.000544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:54.000573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:54.000581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:54.013382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:54.013410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:54.013418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:54.026234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:54.026265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:54.026272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:54.039053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:54.039082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:54.039090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:54.051859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:54.051886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:54.051893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:54.064718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:54.064754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:54.064762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:54.077593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:54.077622] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:54.077629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:54.090375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:54.090403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:54.090426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:54.103041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:54.103068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:54.103076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:54.115872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:54.115903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:54.115910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:54.128747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:54.128788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.981 [2024-12-09 10:59:54.128797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.981 [2024-12-09 10:59:54.141532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.981 [2024-12-09 10:59:54.141560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.982 [2024-12-09 10:59:54.141567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:00.982 [2024-12-09 10:59:54.154312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:00.982 [2024-12-09 10:59:54.154340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:00.982 [2024-12-09 10:59:54.154347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.242 [2024-12-09 10:59:54.167467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x153db50) 00:17:01.242 [2024-12-09 10:59:54.167495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.242 [2024-12-09 10:59:54.167502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.242 [2024-12-09 10:59:54.180396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.242 [2024-12-09 10:59:54.180425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.242 [2024-12-09 10:59:54.180432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.242 [2024-12-09 10:59:54.193270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.242 [2024-12-09 10:59:54.193299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.242 [2024-12-09 10:59:54.193322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.242 [2024-12-09 10:59:54.206018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.242 [2024-12-09 10:59:54.206045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.242 [2024-12-09 10:59:54.206051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.242 [2024-12-09 10:59:54.218763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.242 [2024-12-09 10:59:54.218815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.242 [2024-12-09 10:59:54.218823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.242 [2024-12-09 10:59:54.231624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.242 [2024-12-09 10:59:54.231656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.242 [2024-12-09 10:59:54.231663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.242 [2024-12-09 10:59:54.244408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.242 [2024-12-09 10:59:54.244436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.242 [2024-12-09 10:59:54.244444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.242 [2024-12-09 10:59:54.257197] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.242 [2024-12-09 10:59:54.257225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.242 [2024-12-09 10:59:54.257232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.242 [2024-12-09 10:59:54.269958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.242 [2024-12-09 10:59:54.269985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.242 [2024-12-09 10:59:54.269992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.242 [2024-12-09 10:59:54.282879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.242 [2024-12-09 10:59:54.282908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.242 [2024-12-09 10:59:54.282916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.242 [2024-12-09 10:59:54.296554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.242 [2024-12-09 10:59:54.296586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.242 [2024-12-09 10:59:54.296594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.242 [2024-12-09 10:59:54.309399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.242 [2024-12-09 10:59:54.309428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.242 [2024-12-09 10:59:54.309436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.242 [2024-12-09 10:59:54.322223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.242 [2024-12-09 10:59:54.322250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.242 [2024-12-09 10:59:54.322257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.242 [2024-12-09 10:59:54.335261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.242 [2024-12-09 10:59:54.335289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.242 [2024-12-09 10:59:54.335297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:01.242 [2024-12-09 10:59:54.348330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.242 [2024-12-09 10:59:54.348362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.242 [2024-12-09 10:59:54.348371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.242 [2024-12-09 10:59:54.361663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.242 [2024-12-09 10:59:54.361692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.243 [2024-12-09 10:59:54.361715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.243 [2024-12-09 10:59:54.374729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.243 [2024-12-09 10:59:54.374766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.243 [2024-12-09 10:59:54.374773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.243 [2024-12-09 10:59:54.387849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.243 [2024-12-09 10:59:54.387878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.243 [2024-12-09 10:59:54.387885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.243 [2024-12-09 10:59:54.400972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.243 [2024-12-09 10:59:54.401002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.243 [2024-12-09 10:59:54.401009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.243 [2024-12-09 10:59:54.414029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.243 [2024-12-09 10:59:54.414056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.243 [2024-12-09 10:59:54.414064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.503 [2024-12-09 10:59:54.427318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.503 [2024-12-09 10:59:54.427349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.503 [2024-12-09 10:59:54.427357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.503 [2024-12-09 10:59:54.440486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.503 [2024-12-09 10:59:54.440516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.503 [2024-12-09 10:59:54.440540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.503 [2024-12-09 10:59:54.453549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.503 [2024-12-09 10:59:54.453577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.503 [2024-12-09 10:59:54.453585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.503 [2024-12-09 10:59:54.466411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.503 [2024-12-09 10:59:54.466441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.503 [2024-12-09 10:59:54.466449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.503 [2024-12-09 10:59:54.479403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.503 [2024-12-09 10:59:54.479430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.503 [2024-12-09 10:59:54.479437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.503 [2024-12-09 10:59:54.492230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.503 [2024-12-09 10:59:54.492257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.503 [2024-12-09 10:59:54.492265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.503 [2024-12-09 10:59:54.505048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.503 [2024-12-09 10:59:54.505077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.503 [2024-12-09 10:59:54.505085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.503 [2024-12-09 10:59:54.517920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.503 [2024-12-09 10:59:54.517946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.503 [2024-12-09 10:59:54.517954] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.503 [2024-12-09 10:59:54.530593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.503 [2024-12-09 10:59:54.530620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.503 [2024-12-09 10:59:54.530643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.503 [2024-12-09 10:59:54.543403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.503 [2024-12-09 10:59:54.543431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.503 [2024-12-09 10:59:54.543438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.503 [2024-12-09 10:59:54.556510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.503 [2024-12-09 10:59:54.556539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.503 [2024-12-09 10:59:54.556562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.503 [2024-12-09 10:59:54.570121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.503 [2024-12-09 10:59:54.570150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.503 [2024-12-09 10:59:54.570157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.503 [2024-12-09 10:59:54.583573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.503 [2024-12-09 10:59:54.583602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.503 [2024-12-09 10:59:54.583609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.503 [2024-12-09 10:59:54.596769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.503 [2024-12-09 10:59:54.596795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.503 [2024-12-09 10:59:54.596801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.503 [2024-12-09 10:59:54.609535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.503 [2024-12-09 10:59:54.609566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:01.503 [2024-12-09 10:59:54.609589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.503 [2024-12-09 10:59:54.622326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.504 [2024-12-09 10:59:54.622352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.504 [2024-12-09 10:59:54.622359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.504 [2024-12-09 10:59:54.635093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.504 [2024-12-09 10:59:54.635120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.504 [2024-12-09 10:59:54.635127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.504 [2024-12-09 10:59:54.653389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.504 [2024-12-09 10:59:54.653420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.504 [2024-12-09 10:59:54.653427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.504 [2024-12-09 10:59:54.666200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.504 [2024-12-09 10:59:54.666228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.504 [2024-12-09 10:59:54.666235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.504 [2024-12-09 10:59:54.679063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.504 [2024-12-09 10:59:54.679090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.504 [2024-12-09 10:59:54.679097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.764 [2024-12-09 10:59:54.692196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.764 [2024-12-09 10:59:54.692225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.764 [2024-12-09 10:59:54.692233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.764 [2024-12-09 10:59:54.705047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.764 [2024-12-09 10:59:54.705076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:118 nsid:1 lba:25157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.764 [2024-12-09 10:59:54.705084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.764 [2024-12-09 10:59:54.717806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.764 [2024-12-09 10:59:54.717832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.764 [2024-12-09 10:59:54.717839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.764 [2024-12-09 10:59:54.730651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.764 [2024-12-09 10:59:54.730681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.764 [2024-12-09 10:59:54.730689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.764 [2024-12-09 10:59:54.743386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.764 [2024-12-09 10:59:54.743413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.764 [2024-12-09 10:59:54.743420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.764 [2024-12-09 10:59:54.756343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.764 [2024-12-09 10:59:54.756373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.764 [2024-12-09 10:59:54.756381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.764 [2024-12-09 10:59:54.769101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.764 [2024-12-09 10:59:54.769131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.764 [2024-12-09 10:59:54.769138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.764 [2024-12-09 10:59:54.781833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.764 [2024-12-09 10:59:54.781860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.764 [2024-12-09 10:59:54.781867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.764 [2024-12-09 10:59:54.794576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.764 [2024-12-09 10:59:54.794604] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.764 [2024-12-09 10:59:54.794612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.764 [2024-12-09 10:59:54.807373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.764 [2024-12-09 10:59:54.807401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.764 [2024-12-09 10:59:54.807408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.764 19482.00 IOPS, 76.10 MiB/s [2024-12-09T10:59:54.943Z] [2024-12-09 10:59:54.820544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.764 [2024-12-09 10:59:54.820574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.764 [2024-12-09 10:59:54.820581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.764 [2024-12-09 10:59:54.833300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.764 [2024-12-09 10:59:54.833330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.764 [2024-12-09 10:59:54.833337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.764 [2024-12-09 10:59:54.846095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.764 [2024-12-09 10:59:54.846123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.764 [2024-12-09 10:59:54.846130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.764 [2024-12-09 10:59:54.858833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.764 [2024-12-09 10:59:54.858861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.764 [2024-12-09 10:59:54.858884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.764 [2024-12-09 10:59:54.871611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.764 [2024-12-09 10:59:54.871639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.764 [2024-12-09 10:59:54.871646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.764 [2024-12-09 10:59:54.884373] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.765 [2024-12-09 10:59:54.884401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.765 [2024-12-09 10:59:54.884409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.765 [2024-12-09 10:59:54.897262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.765 [2024-12-09 10:59:54.897289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.765 [2024-12-09 10:59:54.897313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.765 [2024-12-09 10:59:54.910072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.765 [2024-12-09 10:59:54.910100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.765 [2024-12-09 10:59:54.910107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.765 [2024-12-09 10:59:54.922820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.765 [2024-12-09 10:59:54.922846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.765 [2024-12-09 10:59:54.922853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:01.765 [2024-12-09 10:59:54.935542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:01.765 [2024-12-09 10:59:54.935570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:01.765 [2024-12-09 10:59:54.935577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.032 [2024-12-09 10:59:54.948678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.032 [2024-12-09 10:59:54.948709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.032 [2024-12-09 10:59:54.948732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.032 [2024-12-09 10:59:54.961670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.032 [2024-12-09 10:59:54.961700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.032 [2024-12-09 10:59:54.961707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:02.032 [2024-12-09 10:59:54.974398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.032 [2024-12-09 10:59:54.974425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.032 [2024-12-09 10:59:54.974432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.032 [2024-12-09 10:59:54.987039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.032 [2024-12-09 10:59:54.987067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.032 [2024-12-09 10:59:54.987074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.032 [2024-12-09 10:59:54.999776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.032 [2024-12-09 10:59:54.999802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.032 [2024-12-09 10:59:54.999809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.032 [2024-12-09 10:59:55.012596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.032 [2024-12-09 10:59:55.012624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.032 [2024-12-09 10:59:55.012631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.032 [2024-12-09 10:59:55.025536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.032 [2024-12-09 10:59:55.025565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.032 [2024-12-09 10:59:55.025572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.032 [2024-12-09 10:59:55.038628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.032 [2024-12-09 10:59:55.038656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.032 [2024-12-09 10:59:55.038664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.032 [2024-12-09 10:59:55.051449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.032 [2024-12-09 10:59:55.051478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.032 [2024-12-09 10:59:55.051485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.032 [2024-12-09 10:59:55.064366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.032 [2024-12-09 10:59:55.064397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.032 [2024-12-09 10:59:55.064404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.032 [2024-12-09 10:59:55.077385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.032 [2024-12-09 10:59:55.077415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.032 [2024-12-09 10:59:55.077422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.032 [2024-12-09 10:59:55.090337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.032 [2024-12-09 10:59:55.090365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.032 [2024-12-09 10:59:55.090372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.032 [2024-12-09 10:59:55.103195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.032 [2024-12-09 10:59:55.103222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.032 [2024-12-09 10:59:55.103229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.032 [2024-12-09 10:59:55.116126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.032 [2024-12-09 10:59:55.116155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.033 [2024-12-09 10:59:55.116163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.033 [2024-12-09 10:59:55.128937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.033 [2024-12-09 10:59:55.128966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.033 [2024-12-09 10:59:55.128973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.033 [2024-12-09 10:59:55.141705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.033 [2024-12-09 10:59:55.141732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.033 [2024-12-09 10:59:55.141739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.033 [2024-12-09 10:59:55.154429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.033 [2024-12-09 10:59:55.154456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.033 [2024-12-09 10:59:55.154463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.033 [2024-12-09 10:59:55.167230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.033 [2024-12-09 10:59:55.167256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.033 [2024-12-09 10:59:55.167263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.033 [2024-12-09 10:59:55.180028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.033 [2024-12-09 10:59:55.180071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.033 [2024-12-09 10:59:55.180078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.033 [2024-12-09 10:59:55.192879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.033 [2024-12-09 10:59:55.192907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.033 [2024-12-09 10:59:55.192915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.033 [2024-12-09 10:59:55.205895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.033 [2024-12-09 10:59:55.205933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.033 [2024-12-09 10:59:55.205940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.219047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.219090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.219098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.232001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.232050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:02.295 [2024-12-09 10:59:55.232057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.244837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.244865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.244872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.257602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.257632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.257639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.270433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.270463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.270470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.283089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.283115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.283122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.295850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.295876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.295883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.308614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.308641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.308649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.321403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.321431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:14932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.321438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.334162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.334188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.334195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.346891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.346917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.346924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.359657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.359686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.359708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.372621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.372652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.372659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.385469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.385498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.385505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.398282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.398309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.398316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.411034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.411061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.411068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.423797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.423824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.423831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.436517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.436544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.436551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.449366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.449393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.449400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.295 [2024-12-09 10:59:55.462452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.295 [2024-12-09 10:59:55.462480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.295 [2024-12-09 10:59:55.462487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.555 [2024-12-09 10:59:55.482300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.555 [2024-12-09 10:59:55.482331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.555 [2024-12-09 10:59:55.482338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.555 [2024-12-09 10:59:55.495409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.555 [2024-12-09 10:59:55.495438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.555 [2024-12-09 10:59:55.495446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.555 [2024-12-09 10:59:55.508389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.555 
[2024-12-09 10:59:55.508418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.555 [2024-12-09 10:59:55.508425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.555 [2024-12-09 10:59:55.521462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.555 [2024-12-09 10:59:55.521490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.555 [2024-12-09 10:59:55.521497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.555 [2024-12-09 10:59:55.534429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.555 [2024-12-09 10:59:55.534457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.555 [2024-12-09 10:59:55.534465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.555 [2024-12-09 10:59:55.547391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.555 [2024-12-09 10:59:55.547421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.555 [2024-12-09 10:59:55.547428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.555 [2024-12-09 10:59:55.560329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.555 [2024-12-09 10:59:55.560355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.555 [2024-12-09 10:59:55.560363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.555 [2024-12-09 10:59:55.573426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.555 [2024-12-09 10:59:55.573455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.555 [2024-12-09 10:59:55.573462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.555 [2024-12-09 10:59:55.586835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.555 [2024-12-09 10:59:55.586863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.555 [2024-12-09 10:59:55.586870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.555 [2024-12-09 10:59:55.600354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x153db50) 00:17:02.555 [2024-12-09 10:59:55.600387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.555 [2024-12-09 10:59:55.600396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.555 [2024-12-09 10:59:55.614054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.555 [2024-12-09 10:59:55.614084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.555 [2024-12-09 10:59:55.614107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.555 [2024-12-09 10:59:55.627180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.555 [2024-12-09 10:59:55.627207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.555 [2024-12-09 10:59:55.627214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.555 [2024-12-09 10:59:55.639971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.555 [2024-12-09 10:59:55.639998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.556 [2024-12-09 10:59:55.640026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.556 [2024-12-09 10:59:55.652815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.556 [2024-12-09 10:59:55.652840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.556 [2024-12-09 10:59:55.652847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.556 [2024-12-09 10:59:55.665494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.556 [2024-12-09 10:59:55.665524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.556 [2024-12-09 10:59:55.665531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.556 [2024-12-09 10:59:55.678244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.556 [2024-12-09 10:59:55.678272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.556 [2024-12-09 10:59:55.678295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.556 [2024-12-09 10:59:55.691023] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.556 [2024-12-09 10:59:55.691049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.556 [2024-12-09 10:59:55.691056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.556 [2024-12-09 10:59:55.703794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.556 [2024-12-09 10:59:55.703819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.556 [2024-12-09 10:59:55.703827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.556 [2024-12-09 10:59:55.716585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.556 [2024-12-09 10:59:55.716613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.556 [2024-12-09 10:59:55.716620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.556 [2024-12-09 10:59:55.729453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.556 [2024-12-09 10:59:55.729496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.556 [2024-12-09 10:59:55.729503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.815 [2024-12-09 10:59:55.742718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.815 [2024-12-09 10:59:55.742754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.815 [2024-12-09 10:59:55.742762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.815 [2024-12-09 10:59:55.755465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.815 [2024-12-09 10:59:55.755492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.815 [2024-12-09 10:59:55.755499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.815 [2024-12-09 10:59:55.768287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.815 [2024-12-09 10:59:55.768315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.815 [2024-12-09 10:59:55.768322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:17:02.815 [2024-12-09 10:59:55.781141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.815 [2024-12-09 10:59:55.781169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.815 [2024-12-09 10:59:55.781178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.815 [2024-12-09 10:59:55.794026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.815 [2024-12-09 10:59:55.794054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.815 [2024-12-09 10:59:55.794061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.815 [2024-12-09 10:59:55.806781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x153db50) 00:17:02.815 [2024-12-09 10:59:55.806811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.815 [2024-12-09 10:59:55.806834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.815 19545.00 IOPS, 76.35 MiB/s 00:17:02.815 Latency(us) 00:17:02.815 [2024-12-09T10:59:55.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.815 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:02.815 nvme0n1 : 2.00 19566.28 76.43 0.00 0.00 6538.00 6038.47 24840.72 00:17:02.815 [2024-12-09T10:59:55.994Z] =================================================================================================================== 00:17:02.815 [2024-12-09T10:59:55.994Z] Total : 19566.28 76.43 0.00 0.00 6538.00 6038.47 24840.72 00:17:02.815 { 00:17:02.815 "results": [ 00:17:02.815 { 00:17:02.815 "job": "nvme0n1", 00:17:02.815 "core_mask": "0x2", 00:17:02.815 "workload": "randread", 00:17:02.815 "status": "finished", 00:17:02.815 "queue_depth": 128, 00:17:02.815 "io_size": 4096, 00:17:02.815 "runtime": 2.004367, 00:17:02.815 "iops": 19566.277034096052, 00:17:02.815 "mibps": 76.4307696644377, 00:17:02.815 "io_failed": 0, 00:17:02.815 "io_timeout": 0, 00:17:02.815 "avg_latency_us": 6538.00396030608, 00:17:02.815 "min_latency_us": 6038.469868995633, 00:17:02.815 "max_latency_us": 24840.71965065502 00:17:02.815 } 00:17:02.815 ], 00:17:02.815 "core_count": 1 00:17:02.815 } 00:17:02.815 10:59:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:02.815 10:59:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:02.815 10:59:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:02.815 | .driver_specific 00:17:02.815 | .nvme_error 00:17:02.815 | .status_code 00:17:02.815 | .command_transient_transport_error' 00:17:02.815 10:59:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:03.074 10:59:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 153 > 0 )) 00:17:03.074 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80234 00:17:03.074 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80234 ']' 00:17:03.074 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80234 00:17:03.074 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:03.074 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.074 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80234 00:17:03.074 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:03.074 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:03.074 killing process with pid 80234 00:17:03.074 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80234' 00:17:03.074 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80234 00:17:03.074 Received shutdown signal, test time was about 2.000000 seconds 00:17:03.074 00:17:03.074 Latency(us) 00:17:03.074 [2024-12-09T10:59:56.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.075 [2024-12-09T10:59:56.254Z] =================================================================================================================== 00:17:03.075 [2024-12-09T10:59:56.254Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:03.075 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80234 00:17:03.334 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:17:03.334 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:03.334 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:03.334 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:03.334 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:03.334 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80289 00:17:03.334 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80289 /var/tmp/bperf.sock 00:17:03.334 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80289 ']' 00:17:03.334 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:03.334 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:03.334 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:17:03.334 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:03.334 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.334 10:59:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:03.334 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:03.334 Zero copy mechanism will not be used. 00:17:03.334 [2024-12-09 10:59:56.348924] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:17:03.334 [2024-12-09 10:59:56.348988] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80289 ] 00:17:03.334 [2024-12-09 10:59:56.500222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.593 [2024-12-09 10:59:56.546221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.594 [2024-12-09 10:59:56.586997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:04.162 10:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.162 10:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:04.162 10:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:04.162 10:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:04.422 10:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:04.422 10:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.422 10:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:04.422 10:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.422 10:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:04.423 10:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:04.683 nvme0n1 00:17:04.683 10:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:04.683 10:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.684 10:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:04.684 10:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.684 10:59:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:04.684 10:59:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:04.684 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:04.684 Zero copy mechanism will not be used. 00:17:04.684 Running I/O for 2 seconds... 00:17:04.684 [2024-12-09 10:59:57.738921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.738968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.738978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.742725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.742770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.742779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.746495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.746529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.746538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.750211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.750241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.750249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.753906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.753938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.753945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.757587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.757618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.757625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:17:04.684 [2024-12-09 10:59:57.761319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.761351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.761358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.765069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.765099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.765121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.768758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.768784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.768791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.772458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.772486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.772493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.776125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.776154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.776161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.779785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.779812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.779819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.783457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.783486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.783493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.787097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.787126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.787133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.790832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.790862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.790870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.794437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.794466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.794474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.798186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.798218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.798226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.801933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.801963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.801970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.805578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.805607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.805614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.809252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.809281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.809288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.812905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.812936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.812943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.816625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.816657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.816665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.820298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.820328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.820351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.824175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.824204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.824212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.827842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.827868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.684 [2024-12-09 10:59:57.827875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.684 [2024-12-09 10:59:57.831479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.684 [2024-12-09 10:59:57.831509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.685 [2024-12-09 10:59:57.831517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.685 [2024-12-09 10:59:57.835133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.685 [2024-12-09 10:59:57.835164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.685 
[2024-12-09 10:59:57.835171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.685 [2024-12-09 10:59:57.838807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.685 [2024-12-09 10:59:57.838837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.685 [2024-12-09 10:59:57.838844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.685 [2024-12-09 10:59:57.842482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.685 [2024-12-09 10:59:57.842514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.685 [2024-12-09 10:59:57.842521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.685 [2024-12-09 10:59:57.846133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.685 [2024-12-09 10:59:57.846165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.685 [2024-12-09 10:59:57.846172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.685 [2024-12-09 10:59:57.849728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.685 [2024-12-09 10:59:57.849764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.685 [2024-12-09 10:59:57.849772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.685 [2024-12-09 10:59:57.853352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.685 [2024-12-09 10:59:57.853381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.685 [2024-12-09 10:59:57.853388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.685 [2024-12-09 10:59:57.856989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.685 [2024-12-09 10:59:57.857020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.685 [2024-12-09 10:59:57.857028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.685 [2024-12-09 10:59:57.860760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.685 [2024-12-09 10:59:57.860800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.685 [2024-12-09 10:59:57.860808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.864425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.864457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.864465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.868083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.868114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.868137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.871693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.871722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.871729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.875423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.875468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.875491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.879169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.879199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.879207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.882767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.882813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.882820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.886439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.886471] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.886478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.890138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.890170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.890177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.893789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.893814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.893821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.897385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.897413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.897420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.900994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.901025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.901032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.904592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.904623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.904631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.908187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.908217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.908224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.911841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 
10:59:57.911871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.911878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.915477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.915507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.915515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.919139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.919170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.919177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.922713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.922742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.922758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.926289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.926318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.926325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.929954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.929982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.929989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.933507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.933535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.933558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.937188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.937216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.937224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.954 [2024-12-09 10:59:57.940770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.954 [2024-12-09 10:59:57.940798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.954 [2024-12-09 10:59:57.940805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:57.944363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:57.944393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:57.944401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:57.947978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:57.948009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:57.948022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:57.951583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:57.951614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:57.951622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:57.955225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:57.955254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:57.955262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:57.958910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:57.958946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:57.958954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:57.962536] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:57.962566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:57.962573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:57.966137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:57.966165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:57.966172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:57.969858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:57.969887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:57.969894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:57.973474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:57.973502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:57.973509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:57.977119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:57.977149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:57.977156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:57.980732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:57.980771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:57.980778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:57.984328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:57.984358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:57.984366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:17:04.955 [2024-12-09 10:59:57.988008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:57.988044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:57.988051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:57.991629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:57.991657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:57.991664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:57.995289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:57.995317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:57.995324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:57.998932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:57.998961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:57.998968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:58.002542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:58.002570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:58.002577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:58.006182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:58.006212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:58.006219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:58.009869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:58.009900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:58.009907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:58.013498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:58.013528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:58.013535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:58.017133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:58.017165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:58.017172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:58.020702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:58.020732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:58.020739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:58.024270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:58.024299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:58.024306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:58.027935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:58.027963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:58.027970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:58.031594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:58.031623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:58.031630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:58.035301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:58.035329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:58.035336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:58.038909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.955 [2024-12-09 10:59:58.038939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.955 [2024-12-09 10:59:58.038946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.955 [2024-12-09 10:59:58.042604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.042635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.042657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.046287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.046317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.046324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.049932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.049962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.049969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.053535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.053564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.053571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.057206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.057234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.057241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.060851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.060882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 
[2024-12-09 10:59:58.060889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.064503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.064534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.064541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.068138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.068168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.068175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.071763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.071791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.071798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.075447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.075478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.075485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.079132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.079163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.079170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.082720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.082758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.082766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.086313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.086341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.086348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.090092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.090120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.090127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.093674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.093704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.093711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.097283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.097312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.097319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.100912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.100942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.100949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.104513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.104542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.104550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.108116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.108147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.108154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.111756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.111782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.111789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.115382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.115410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.115417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.119133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.119163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.119171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.123449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.123484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.123492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:04.956 [2024-12-09 10:59:58.127409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:04.956 [2024-12-09 10:59:58.127443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.956 [2024-12-09 10:59:58.127450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.232 [2024-12-09 10:59:58.131352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.232 [2024-12-09 10:59:58.131385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.232 [2024-12-09 10:59:58.131392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.232 [2024-12-09 10:59:58.135310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.232 [2024-12-09 10:59:58.135343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.232 [2024-12-09 10:59:58.135350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.232 [2024-12-09 10:59:58.139173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.232 [2024-12-09 10:59:58.139205] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.232 [2024-12-09 10:59:58.139212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.232 [2024-12-09 10:59:58.143017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.232 [2024-12-09 10:59:58.143050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.232 [2024-12-09 10:59:58.143057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.232 [2024-12-09 10:59:58.146895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.232 [2024-12-09 10:59:58.146928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.232 [2024-12-09 10:59:58.146936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.232 [2024-12-09 10:59:58.150675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.232 [2024-12-09 10:59:58.150706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.232 [2024-12-09 10:59:58.150730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.232 [2024-12-09 10:59:58.154393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.232 [2024-12-09 10:59:58.154422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.232 [2024-12-09 10:59:58.154429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.232 [2024-12-09 10:59:58.158106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.158134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.158141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.161792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.161821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.161828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.165525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.165555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.165577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.169172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.169203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.169210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.172832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.172860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.172867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.176485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.176514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.176521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.180176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.180205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.180212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.183798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.183825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.183832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.187399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.187428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.187435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.191021] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.191052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.191058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.194608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.194639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.194646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.198254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.198285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.198292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.201863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.201890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.201896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.205454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.205483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.205505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.209107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.209139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.209146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.212681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.212712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.212719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.216303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.216334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.216340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.219921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.219950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.219957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.223562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.233 [2024-12-09 10:59:58.223592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.233 [2024-12-09 10:59:58.223599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.233 [2024-12-09 10:59:58.227170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.234 [2024-12-09 10:59:58.227200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.234 [2024-12-09 10:59:58.227208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.234 [2024-12-09 10:59:58.230741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.234 [2024-12-09 10:59:58.230794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.234 [2024-12-09 10:59:58.230801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.234 [2024-12-09 10:59:58.234373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.234 [2024-12-09 10:59:58.234401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.234 [2024-12-09 10:59:58.234408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.234 [2024-12-09 10:59:58.238019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.234 [2024-12-09 10:59:58.238047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.234 [2024-12-09 10:59:58.238054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.234 [2024-12-09 10:59:58.241660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.234 [2024-12-09 10:59:58.241689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.234 [2024-12-09 10:59:58.241696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.234 [2024-12-09 10:59:58.245284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.234 [2024-12-09 10:59:58.245312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.234 [2024-12-09 10:59:58.245319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.234 [2024-12-09 10:59:58.248838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.234 [2024-12-09 10:59:58.248868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.234 [2024-12-09 10:59:58.248875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.234 [2024-12-09 10:59:58.252423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.234 [2024-12-09 10:59:58.252452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.234 [2024-12-09 10:59:58.252475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.234 [2024-12-09 10:59:58.256066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.234 [2024-12-09 10:59:58.256094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.234 [2024-12-09 10:59:58.256101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.234 [2024-12-09 10:59:58.259679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.234 [2024-12-09 10:59:58.259706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.234 [2024-12-09 10:59:58.259713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.234 [2024-12-09 10:59:58.263330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.234 [2024-12-09 10:59:58.263359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.234 [2024-12-09 10:59:58.263366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.234 [2024-12-09 10:59:58.266954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.234 [2024-12-09 10:59:58.266983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.234 [2024-12-09 10:59:58.266990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.234 [2024-12-09 10:59:58.270548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.234 [2024-12-09 10:59:58.270578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.270600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.274269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.274298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.274306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.277897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.277924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.277931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.281576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.281610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.281617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.285214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.285244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.285251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.288843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.288873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 
[2024-12-09 10:59:58.288881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.292503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.292534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.292541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.296159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.296189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.296196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.299730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.299767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.299775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.303339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.303368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.303375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.306938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.306968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.306975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.310523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.310553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.310575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.314213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.314242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18272 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.314249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.317841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.317873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.317880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.321460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.321490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.321497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.325116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.325146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.325154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.328651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.328679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.328687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.332330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.332358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.332381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.335959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.335987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.335995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.235 [2024-12-09 10:59:58.339586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.235 [2024-12-09 10:59:58.339614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.235 [2024-12-09 10:59:58.339621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.343293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.343322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.236 [2024-12-09 10:59:58.343330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.346911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.346939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.236 [2024-12-09 10:59:58.346946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.350481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.350511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.236 [2024-12-09 10:59:58.350518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.354125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.354153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.236 [2024-12-09 10:59:58.354161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.357683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.357714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.236 [2024-12-09 10:59:58.357721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.361276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.361307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.236 [2024-12-09 10:59:58.361315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.364920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.364951] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.236 [2024-12-09 10:59:58.364958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.368554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.368585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.236 [2024-12-09 10:59:58.368592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.372193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.372222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.236 [2024-12-09 10:59:58.372229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.375784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.375811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.236 [2024-12-09 10:59:58.375817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.379461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.379489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.236 [2024-12-09 10:59:58.379497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.383110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.383139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.236 [2024-12-09 10:59:58.383147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.386718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.386761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.236 [2024-12-09 10:59:58.386769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.390359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.390391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.236 [2024-12-09 10:59:58.390414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.394035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.394067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.236 [2024-12-09 10:59:58.394074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.397799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.397824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.236 [2024-12-09 10:59:58.397831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.236 [2024-12-09 10:59:58.401529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.236 [2024-12-09 10:59:58.401560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.237 [2024-12-09 10:59:58.401567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.237 [2024-12-09 10:59:58.405333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.237 [2024-12-09 10:59:58.405364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.237 [2024-12-09 10:59:58.405372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.512 [2024-12-09 10:59:58.409211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.409258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.409266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.412961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.412994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.413001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.416806] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.416838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.416847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.420845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.420875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.420883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.424537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.424569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.424576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.428242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.428274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.428281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.431897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.431928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.431951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.435600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.435631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.435638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.439293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.439324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.439331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.442899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.442934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.442941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.446572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.446602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.446609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.450263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.450306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.450313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.453961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.453990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.453997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.457510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.457539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.457546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.461244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.461286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.461293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.464997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.465027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.465050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.468693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.468723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.468730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.472367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.472399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.472406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.476065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.476103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.476110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.479676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.479705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.479712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.483382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.483411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.483418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.487040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.487068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.487075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.490702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.490733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.490740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.494369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.494398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.494420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.497983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.498013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.498020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.501627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.501659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.501667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.505366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.513 [2024-12-09 10:59:58.505398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.513 [2024-12-09 10:59:58.505405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.513 [2024-12-09 10:59:58.508941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.508972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.508979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.512497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.512526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.512533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.516160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.516189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:05.514 [2024-12-09 10:59:58.516197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.519805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.519832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.519839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.523485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.523514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.523521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.527149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.527178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.527185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.530732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.530769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.530792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.534406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.534433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.534440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.538111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.538139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.538146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.541711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.541742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23712 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.541759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.545424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.545454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.545477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.549063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.549093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.549101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.552659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.552690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.552697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.556356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.556385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.556409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.559992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.560026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.560033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.563623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.563652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.563658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.567293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.567322] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.567329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.570949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.570979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.570986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.574628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.574659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.574666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.578242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.578271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.578295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.581866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.581894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.581901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.585489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.585518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.585525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.589105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.589134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.589141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.592710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 
[2024-12-09 10:59:58.592741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.592758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.596388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.596417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.596441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.599994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.600032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.600039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.603648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.603678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.603685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.514 [2024-12-09 10:59:58.607320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.514 [2024-12-09 10:59:58.607350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.514 [2024-12-09 10:59:58.607357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.610926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.610957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.610964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.614598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.614627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.614634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.618210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.618239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.618262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.621827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.621854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.621862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.625436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.625464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.625471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.628974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.629004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.629011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.632598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.632629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.632636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.636247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.636278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.636286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.639881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.639905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.639912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.643495] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.643525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.643532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.647139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.647168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.647175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.650795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.650822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.650829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.654374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.654403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.654410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.657976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.658005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.658012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.661656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.661688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.661695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.665388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.665421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.665429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:17:05.515 [2024-12-09 10:59:58.669119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.669154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.669162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.672809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.672839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.672846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.676498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.676530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.676538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.680403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.680437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.680446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.515 [2024-12-09 10:59:58.684533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.515 [2024-12-09 10:59:58.684568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.515 [2024-12-09 10:59:58.684576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.794 [2024-12-09 10:59:58.689229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.794 [2024-12-09 10:59:58.689271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.794 [2024-12-09 10:59:58.689291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.794 [2024-12-09 10:59:58.693239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.794 [2024-12-09 10:59:58.693277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.794 [2024-12-09 10:59:58.693287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.794 [2024-12-09 10:59:58.697156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.794 [2024-12-09 10:59:58.697194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.794 [2024-12-09 10:59:58.697203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.794 [2024-12-09 10:59:58.700954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.794 [2024-12-09 10:59:58.700988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.794 [2024-12-09 10:59:58.700996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.794 [2024-12-09 10:59:58.704674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.794 [2024-12-09 10:59:58.704707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.794 [2024-12-09 10:59:58.704714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.794 [2024-12-09 10:59:58.708462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.794 [2024-12-09 10:59:58.708493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.794 [2024-12-09 10:59:58.708517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.794 [2024-12-09 10:59:58.712178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.794 [2024-12-09 10:59:58.712210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.794 [2024-12-09 10:59:58.712218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.794 [2024-12-09 10:59:58.715842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.794 [2024-12-09 10:59:58.715873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.794 [2024-12-09 10:59:58.715880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.794 [2024-12-09 10:59:58.719500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.794 [2024-12-09 10:59:58.719532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.794 [2024-12-09 10:59:58.719539] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.794 [2024-12-09 10:59:58.723127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.794 [2024-12-09 10:59:58.723157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.723165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.726761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.726789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.726796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.795 8370.00 IOPS, 1046.25 MiB/s [2024-12-09T10:59:58.974Z] [2024-12-09 10:59:58.731268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.731302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.731310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.734925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.734955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.734962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.738624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.738652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.738659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.742286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.742314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.742322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.745973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.746004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.746011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.749495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.749526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.749533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.753159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.753189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.753209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.756766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.756794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.756801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.760406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.760437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.760444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.764062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.764092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.764099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.767736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.767774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.767781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.771404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.771434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.771441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.775057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.775087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.775094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.778759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.778785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.778792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.782418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.782446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.782453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.786015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.786043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.786050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.789597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.789626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.789633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.793206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.793235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.793243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.796920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.796951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.796959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.800551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.800581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.800589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.804231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.804263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.804271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.807888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.807927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.807935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.811518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.811548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.811555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.815149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.815179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.815186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.818736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 [2024-12-09 10:59:58.818772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.818779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.822432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.795 
[2024-12-09 10:59:58.822461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.795 [2024-12-09 10:59:58.822468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.795 [2024-12-09 10:59:58.826141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.826169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.826176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.829832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.829859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.829866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.833575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.833606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.833613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.837293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.837324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.837333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.841288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.841318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.841327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.845016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.845061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.845069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.848896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.848927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.848935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.852696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.852727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.852734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.856460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.856490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.856497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.860176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.860209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.860218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.863783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.863809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.863816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.867453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.867481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.867488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.871287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.871315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.871323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.874975] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.875005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.875012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.878635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.878666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.878673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.882345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.882376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.882383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.886179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.886209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.886217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.889844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.889872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.889879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.893555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.893584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.893591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.897255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.897288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.897296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:17:05.796 [2024-12-09 10:59:58.901009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.901044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.901053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.904972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.905011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.905021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.908797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.908833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.908841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.912492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.912523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.912547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.916316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.916349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.916357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.919946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.919977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.920000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.923670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.923702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.923710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.796 [2024-12-09 10:59:58.927373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.796 [2024-12-09 10:59:58.927403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.796 [2024-12-09 10:59:58.927410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.797 [2024-12-09 10:59:58.931167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.797 [2024-12-09 10:59:58.931198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.797 [2024-12-09 10:59:58.931205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.797 [2024-12-09 10:59:58.934827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.797 [2024-12-09 10:59:58.934854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.797 [2024-12-09 10:59:58.934861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.797 [2024-12-09 10:59:58.938540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.797 [2024-12-09 10:59:58.938571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.797 [2024-12-09 10:59:58.938579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.797 [2024-12-09 10:59:58.942290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.797 [2024-12-09 10:59:58.942320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.797 [2024-12-09 10:59:58.942328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.797 [2024-12-09 10:59:58.945985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.797 [2024-12-09 10:59:58.946014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.797 [2024-12-09 10:59:58.946021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.797 [2024-12-09 10:59:58.949610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.797 [2024-12-09 10:59:58.949638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.797 [2024-12-09 10:59:58.949645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.797 [2024-12-09 10:59:58.953311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.797 [2024-12-09 10:59:58.953342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.797 [2024-12-09 10:59:58.953349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:05.797 [2024-12-09 10:59:58.956913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.797 [2024-12-09 10:59:58.956943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.797 [2024-12-09 10:59:58.956951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:05.797 [2024-12-09 10:59:58.960575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.797 [2024-12-09 10:59:58.960606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.797 [2024-12-09 10:59:58.960614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:05.797 [2024-12-09 10:59:58.964206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.797 [2024-12-09 10:59:58.964237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.797 [2024-12-09 10:59:58.964244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.797 [2024-12-09 10:59:58.967902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:05.797 [2024-12-09 10:59:58.967937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:05.797 [2024-12-09 10:59:58.967945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.058 [2024-12-09 10:59:58.971540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.058 [2024-12-09 10:59:58.971571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.058 [2024-12-09 10:59:58.971578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.058 [2024-12-09 10:59:58.975220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.058 [2024-12-09 10:59:58.975250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:06.058 [2024-12-09 10:59:58.975257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.058 [2024-12-09 10:59:58.978941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.058 [2024-12-09 10:59:58.978971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.058 [2024-12-09 10:59:58.978994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.058 [2024-12-09 10:59:58.982607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.058 [2024-12-09 10:59:58.982636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.058 [2024-12-09 10:59:58.982643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.058 [2024-12-09 10:59:58.986216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.058 [2024-12-09 10:59:58.986245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.058 [2024-12-09 10:59:58.986252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.058 [2024-12-09 10:59:58.989770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.058 [2024-12-09 10:59:58.989816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.058 [2024-12-09 10:59:58.989822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.058 [2024-12-09 10:59:58.993402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.058 [2024-12-09 10:59:58.993434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.058 [2024-12-09 10:59:58.993441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.058 [2024-12-09 10:59:58.997042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.058 [2024-12-09 10:59:58.997074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.058 [2024-12-09 10:59:58.997082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.058 [2024-12-09 10:59:59.000645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.058 [2024-12-09 10:59:59.000675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6432 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.058 [2024-12-09 10:59:59.000682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.058 [2024-12-09 10:59:59.004336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.058 [2024-12-09 10:59:59.004368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.058 [2024-12-09 10:59:59.004376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.058 [2024-12-09 10:59:59.007964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.058 [2024-12-09 10:59:59.007993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.058 [2024-12-09 10:59:59.007999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.058 [2024-12-09 10:59:59.011612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.058 [2024-12-09 10:59:59.011641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.058 [2024-12-09 10:59:59.011648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.058 [2024-12-09 10:59:59.015284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.058 [2024-12-09 10:59:59.015313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.058 [2024-12-09 10:59:59.015320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.058 [2024-12-09 10:59:59.018916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.058 [2024-12-09 10:59:59.018947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.058 [2024-12-09 10:59:59.018970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.058 [2024-12-09 10:59:59.022572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.058 [2024-12-09 10:59:59.022603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.058 [2024-12-09 10:59:59.022610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.058 [2024-12-09 10:59:59.026505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.026539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.026546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.030116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.030147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.030170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.033786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.033822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.033829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.037408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.037437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.037445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.041159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.041188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.041194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.044723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.044764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.044771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.048387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.048417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.048424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.052000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 
00:17:06.059 [2024-12-09 10:59:59.052036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.052059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.055588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.055617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.055639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.059292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.059323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.059330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.062925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.062955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.062978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.066587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.066616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.066623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.070353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.070381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.070404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.074018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.074046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.074053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.077644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.077672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.077679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.081291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.081320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.081327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.084889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.084919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.084927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.088474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.088505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.088512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.092107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.092137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.092144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.095628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.095656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.095663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.099300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.099328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.099334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.102912] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.102940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.102947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.106541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.106568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.106574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.110211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.110239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.110246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.113826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.113858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.113865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.117456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.117487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.117494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.121064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.121094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.059 [2024-12-09 10:59:59.121101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.059 [2024-12-09 10:59:59.124676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.059 [2024-12-09 10:59:59.124706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.124713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:17:06.060 [2024-12-09 10:59:59.128320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.128350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.128357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.131839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.131866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.131873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.135409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.135436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.135443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.139005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.139035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.139042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.142605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.142635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.142641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.146181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.146211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.146218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.149814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.149840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.149847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.153400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.153428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.153435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.156993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.157024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.157031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.160500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.160531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.160538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.164073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.164102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.164124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.167628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.167657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.167663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.171223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.171253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.171260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.174685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.174714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.174736] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.178247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.178275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.178281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.181846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.181873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.181880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.185502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.185530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.185536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.189146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.189174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.189181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.192710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.192740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.192758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.196359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.196389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.196396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.199872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.199899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 
10:59:59.199905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.203385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.203413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.203436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.206950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.206978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.206985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.210563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.210590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.210597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.214213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.214241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.214248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.217859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.217890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.217897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.060 [2024-12-09 10:59:59.221479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.060 [2024-12-09 10:59:59.221509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.060 [2024-12-09 10:59:59.221532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.061 [2024-12-09 10:59:59.225088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.061 [2024-12-09 10:59:59.225120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:06.061 [2024-12-09 10:59:59.225127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.061 [2024-12-09 10:59:59.228648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.061 [2024-12-09 10:59:59.228678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.061 [2024-12-09 10:59:59.228685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.061 [2024-12-09 10:59:59.232288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.061 [2024-12-09 10:59:59.232319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.061 [2024-12-09 10:59:59.232327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.235972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.236001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.236009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.239604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.239634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.239641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.243300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.243331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.243338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.246939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.246971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.246994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.250541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.250572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:12 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.250596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.254214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.254245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.254268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.257779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.257813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.257836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.261395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.261426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.261432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.264974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.265004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.265011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.268543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.268573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.268580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.272168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.272199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.272206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.275769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.275797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.275822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.279379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.279410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.279434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.283008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.283039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.283063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.286610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.286639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.286646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.290251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.290282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.290289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.293827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.293855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.293862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.297450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.297478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.297486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.301101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 
[2024-12-09 10:59:59.301132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.301139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.304670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.304700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.304708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.308313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.322 [2024-12-09 10:59:59.308343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.322 [2024-12-09 10:59:59.308350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.322 [2024-12-09 10:59:59.311984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.312022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.312029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.315592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.315620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.315627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.319272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.319300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.319307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.322989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.323017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.323024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.326574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.326602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.326609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.330215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.330243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.330250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.333804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.333833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.333856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.337434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.337464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.337471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.341126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.341157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.341164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.344677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.344708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.344715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.348231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.348261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.348268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.351876] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.351904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.351912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.355554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.355582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.355590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.359268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.359298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.359305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.362968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.363000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.363007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.366602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.366634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.366641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.370246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.370277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.370284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.373875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.373903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.373910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.377475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.377504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.377527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.381089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.381120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.381127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.384609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.384641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.384648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.388403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.388437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.388444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.392090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.392122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.392129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.395784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.395813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.395836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.399422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.399455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.399462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.403160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.403191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.403198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.406814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.323 [2024-12-09 10:59:59.406841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.323 [2024-12-09 10:59:59.406848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.323 [2024-12-09 10:59:59.410494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.410526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.410548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.414196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.414227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.414234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.417903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.417931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.417938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.421584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.421613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.421620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.425397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.425427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.425434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.429074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.429105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.429113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.432735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.432776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.432784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.436319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.436349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.436372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.439963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.439993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.440000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.443560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.443590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.443597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.447231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.447261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.447267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.450949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.450978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:06.324 [2024-12-09 10:59:59.450984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.454590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.454618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.454626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.458256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.458285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.458292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.462024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.462053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.462059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.465707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.465738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.465757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.469365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.469396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.469403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.472988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.473017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.473025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.476552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.476581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6784 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.476589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.480191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.480221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.480228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.483835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.483862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.483869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.487436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.487464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.487471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.491047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.491075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.491098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.324 [2024-12-09 10:59:59.494727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.324 [2024-12-09 10:59:59.494767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.324 [2024-12-09 10:59:59.494775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.498468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.498500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.498508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.502228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.502259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.502282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.505929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.505960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.505967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.509632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.509661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.509668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.513356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.513386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.513392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.516988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.517019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.517027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.520582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.520613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.520620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.524232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.524264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.524272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.527898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 
[2024-12-09 10:59:59.527928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.527935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.531621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.531651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.531674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.535259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.535290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.535297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.538882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.538910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.538916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.542483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.542513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.542520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.546153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.546181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.546188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.549776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.549802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.549809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.553436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.553465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.553473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.557082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.557114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.557122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.560704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.560735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.560742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.564352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.564382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.564405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.567928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.567958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.567965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.571500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.571527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.571534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.575159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.575187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.575193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.578725] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.578778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.578785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.582385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.582413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.582419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.586 [2024-12-09 10:59:59.586080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.586 [2024-12-09 10:59:59.586107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.586 [2024-12-09 10:59:59.586113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.589647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.589677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.589684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.593271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.593301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.593308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.596869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.596900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.596907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.600496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.600526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.600533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:17:06.587 [2024-12-09 10:59:59.604139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.604168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.604175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.607662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.607690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.607697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.611210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.611238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.611245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.614770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.614797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.614804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.618436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.618463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.618469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.622061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.622091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.622098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.625754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.625805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.625812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.629376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.629405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.629411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.633022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.633053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.633060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.636655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.636685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.636692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.640256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.640286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.640293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.643790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.643818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.643825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.647337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.647367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.647374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.650870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.650898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.650905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.654371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.654398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.654405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.657988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.658016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.658023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.661633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.661661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.661668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.665244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.665273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.665280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.668868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.668897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.668905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.672442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.672472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.672479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.675992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.676028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:06.587 [2024-12-09 10:59:59.676035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.679512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.679541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.679547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.683194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.683221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.587 [2024-12-09 10:59:59.683228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.587 [2024-12-09 10:59:59.686765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.587 [2024-12-09 10:59:59.686790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.588 [2024-12-09 10:59:59.686796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.588 [2024-12-09 10:59:59.690349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.588 [2024-12-09 10:59:59.690377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.588 [2024-12-09 10:59:59.690384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.588 [2024-12-09 10:59:59.694027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.588 [2024-12-09 10:59:59.694058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.588 [2024-12-09 10:59:59.694065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.588 [2024-12-09 10:59:59.697717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.588 [2024-12-09 10:59:59.697756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.588 [2024-12-09 10:59:59.697764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.588 [2024-12-09 10:59:59.701334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.588 [2024-12-09 10:59:59.701365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10272 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.588 [2024-12-09 10:59:59.701373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.588 [2024-12-09 10:59:59.704954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.588 [2024-12-09 10:59:59.704984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.588 [2024-12-09 10:59:59.704992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.588 [2024-12-09 10:59:59.708652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.588 [2024-12-09 10:59:59.708684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.588 [2024-12-09 10:59:59.708691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.588 [2024-12-09 10:59:59.712338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.588 [2024-12-09 10:59:59.712371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.588 [2024-12-09 10:59:59.712378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.588 [2024-12-09 10:59:59.716252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.588 [2024-12-09 10:59:59.716284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.588 [2024-12-09 10:59:59.716308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:06.588 [2024-12-09 10:59:59.720340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.588 [2024-12-09 10:59:59.720373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.588 [2024-12-09 10:59:59.720382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:06.588 [2024-12-09 10:59:59.724246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.588 [2024-12-09 10:59:59.724276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.588 [2024-12-09 10:59:59.724284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:06.588 8416.50 IOPS, 1052.06 MiB/s [2024-12-09T10:59:59.767Z] [2024-12-09 10:59:59.729329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x266a620) 00:17:06.588 [2024-12-09 
10:59:59.729363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.588 [2024-12-09 10:59:59.729372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:06.588 00:17:06.588 Latency(us) 00:17:06.588 [2024-12-09T10:59:59.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.588 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:06.588 nvme0n1 : 2.00 8414.56 1051.82 0.00 0.00 1898.93 1702.79 10016.42 00:17:06.588 [2024-12-09T10:59:59.767Z] =================================================================================================================== 00:17:06.588 [2024-12-09T10:59:59.767Z] Total : 8414.56 1051.82 0.00 0.00 1898.93 1702.79 10016.42 00:17:06.588 { 00:17:06.588 "results": [ 00:17:06.588 { 00:17:06.588 "job": "nvme0n1", 00:17:06.588 "core_mask": "0x2", 00:17:06.588 "workload": "randread", 00:17:06.588 "status": "finished", 00:17:06.588 "queue_depth": 16, 00:17:06.588 "io_size": 131072, 00:17:06.588 "runtime": 2.002362, 00:17:06.588 "iops": 8414.56240180347, 00:17:06.588 "mibps": 1051.8203002254338, 00:17:06.588 "io_failed": 0, 00:17:06.588 "io_timeout": 0, 00:17:06.588 "avg_latency_us": 1898.9332268303535, 00:17:06.588 "min_latency_us": 1702.7912663755458, 00:17:06.588 "max_latency_us": 10016.419213973799 00:17:06.588 } 00:17:06.588 ], 00:17:06.588 "core_count": 1 00:17:06.588 } 00:17:06.588 10:59:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:06.588 10:59:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:06.588 10:59:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:06.588 10:59:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:06.588 | .driver_specific 00:17:06.588 | .nvme_error 00:17:06.588 | .status_code 00:17:06.588 | .command_transient_transport_error' 00:17:06.848 10:59:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 544 > 0 )) 00:17:06.848 10:59:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80289 00:17:06.848 10:59:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80289 ']' 00:17:06.848 10:59:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80289 00:17:06.848 10:59:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:06.848 10:59:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:06.848 10:59:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80289 00:17:06.848 10:59:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:06.848 10:59:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:06.848 10:59:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80289' 
00:17:06.848 killing process with pid 80289 00:17:06.848 10:59:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80289 00:17:06.848 Received shutdown signal, test time was about 2.000000 seconds 00:17:06.848 00:17:06.848 Latency(us) 00:17:06.848 [2024-12-09T11:00:00.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.848 [2024-12-09T11:00:00.027Z] =================================================================================================================== 00:17:06.848 [2024-12-09T11:00:00.027Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:06.848 10:59:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80289 00:17:07.108 11:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:17:07.108 11:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:07.108 11:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:07.108 11:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:07.108 11:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:07.108 11:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80349 00:17:07.108 11:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80349 /var/tmp/bperf.sock 00:17:07.108 11:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:07.108 11:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80349 ']' 00:17:07.108 11:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:07.108 11:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:07.108 11:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:07.108 11:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.108 11:00:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:07.108 [2024-12-09 11:00:00.246833] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:17:07.108 [2024-12-09 11:00:00.246894] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80349 ] 00:17:07.367 [2024-12-09 11:00:00.399688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.367 [2024-12-09 11:00:00.446169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.367 [2024-12-09 11:00:00.486915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:07.937 11:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.937 11:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:07.937 11:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:07.937 11:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:08.196 11:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:08.196 11:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.196 11:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:08.196 11:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.196 11:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:08.196 11:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:08.455 nvme0n1 00:17:08.455 11:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:08.455 11:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.455 11:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:08.455 11:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.455 11:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:08.455 11:00:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:08.715 Running I/O for 2 seconds... 
00:17:08.715 [2024-12-09 11:00:01.698896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efb048 00:17:08.715 [2024-12-09 11:00:01.700068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.715 [2024-12-09 11:00:01.700110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:08.715 [2024-12-09 11:00:01.711099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efb8b8 00:17:08.715 [2024-12-09 11:00:01.712213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.715 [2024-12-09 11:00:01.712244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:08.715 [2024-12-09 11:00:01.723203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efc128 00:17:08.715 [2024-12-09 11:00:01.724307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.715 [2024-12-09 11:00:01.724339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:08.715 [2024-12-09 11:00:01.735222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efc998 00:17:08.715 [2024-12-09 11:00:01.736305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.715 [2024-12-09 11:00:01.736338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:08.715 [2024-12-09 11:00:01.747209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efd208 00:17:08.715 [2024-12-09 11:00:01.748277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.715 [2024-12-09 11:00:01.748308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:08.715 [2024-12-09 11:00:01.759511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efda78 00:17:08.715 [2024-12-09 11:00:01.760581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.715 [2024-12-09 11:00:01.760612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:08.715 [2024-12-09 11:00:01.772477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efe2e8 00:17:08.715 [2024-12-09 11:00:01.773652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.715 [2024-12-09 11:00:01.773681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 
p:0 m:0 dnr:0 00:17:08.715 [2024-12-09 11:00:01.785226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efeb58 00:17:08.715 [2024-12-09 11:00:01.786256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.715 [2024-12-09 11:00:01.786284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:08.715 [2024-12-09 11:00:01.802288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efef90 00:17:08.715 [2024-12-09 11:00:01.804252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.715 [2024-12-09 11:00:01.804279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:08.715 [2024-12-09 11:00:01.814380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efeb58 00:17:08.715 [2024-12-09 11:00:01.816293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.715 [2024-12-09 11:00:01.816320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:08.715 [2024-12-09 11:00:01.826490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efe2e8 00:17:08.715 [2024-12-09 11:00:01.828496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.715 [2024-12-09 11:00:01.828524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:08.715 [2024-12-09 11:00:01.838714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efda78 00:17:08.715 [2024-12-09 11:00:01.840692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.715 [2024-12-09 11:00:01.840720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:08.715 [2024-12-09 11:00:01.850824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efd208 00:17:08.715 [2024-12-09 11:00:01.852681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.715 [2024-12-09 11:00:01.852709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:08.715 [2024-12-09 11:00:01.862830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efc998 00:17:08.715 [2024-12-09 11:00:01.864698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.715 [2024-12-09 11:00:01.864727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:08.715 [2024-12-09 11:00:01.875156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efc128 00:17:08.715 [2024-12-09 11:00:01.877083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.715 [2024-12-09 11:00:01.877112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:08.715 [2024-12-09 11:00:01.887444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efb8b8 00:17:08.715 [2024-12-09 11:00:01.889414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.715 [2024-12-09 11:00:01.889444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:01.900660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efb048 00:17:08.976 [2024-12-09 11:00:01.902751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:01.902789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:01.913583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efa7d8 00:17:08.976 [2024-12-09 11:00:01.915443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:01.915469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:01.925852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef9f68 00:17:08.976 [2024-12-09 11:00:01.927632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:01.927662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:01.938124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef96f8 00:17:08.976 [2024-12-09 11:00:01.939904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:01.939931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:01.950400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef8e88 00:17:08.976 [2024-12-09 11:00:01.952168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:01.952195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:01.962456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef8618 00:17:08.976 [2024-12-09 11:00:01.964258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:01.964285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:01.974737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef7da8 00:17:08.976 [2024-12-09 11:00:01.976559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:01.976587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:01.986833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef7538 00:17:08.976 [2024-12-09 11:00:01.988580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:01.988607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:01.998966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef6cc8 00:17:08.976 [2024-12-09 11:00:02.000728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:02.000762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:02.011159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef6458 00:17:08.976 [2024-12-09 11:00:02.012885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:02.012912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:02.023218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef5be8 00:17:08.976 [2024-12-09 11:00:02.024951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:02.024992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:02.035235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef5378 00:17:08.976 [2024-12-09 11:00:02.036976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:02.037004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:02.047259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef4b08 00:17:08.976 [2024-12-09 11:00:02.048940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:02.048968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:02.059296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef4298 00:17:08.976 [2024-12-09 11:00:02.060979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:02.061006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:02.071313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef3a28 00:17:08.976 [2024-12-09 11:00:02.073002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:02.073029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:02.083478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef31b8 00:17:08.976 [2024-12-09 11:00:02.085127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:02.085153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:02.095466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef2948 00:17:08.976 [2024-12-09 11:00:02.097032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:02.097060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:02.107503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef20d8 00:17:08.976 [2024-12-09 11:00:02.109128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.976 [2024-12-09 11:00:02.109155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:08.976 [2024-12-09 11:00:02.119595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef1868 00:17:08.977 [2024-12-09 11:00:02.121194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.977 [2024-12-09 11:00:02.121231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:08.977 [2024-12-09 11:00:02.131703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef0ff8 00:17:08.977 [2024-12-09 11:00:02.133297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.977 [2024-12-09 11:00:02.133325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:08.977 [2024-12-09 11:00:02.143642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef0788 00:17:08.977 [2024-12-09 11:00:02.145208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:08.977 [2024-12-09 11:00:02.145234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.155728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eeff18 00:17:09.237 [2024-12-09 11:00:02.157308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.237 [2024-12-09 11:00:02.157336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.167890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eef6a8 00:17:09.237 [2024-12-09 11:00:02.169439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.237 [2024-12-09 11:00:02.169464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.179884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eeee38 00:17:09.237 [2024-12-09 11:00:02.181416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.237 [2024-12-09 11:00:02.181442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.191753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eee5c8 00:17:09.237 [2024-12-09 11:00:02.193278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.237 [2024-12-09 11:00:02.193305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.203901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eedd58 00:17:09.237 [2024-12-09 11:00:02.205411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.237 [2024-12-09 
11:00:02.205438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.216032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eed4e8 00:17:09.237 [2024-12-09 11:00:02.217520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.237 [2024-12-09 11:00:02.217544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.228183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eecc78 00:17:09.237 [2024-12-09 11:00:02.229641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.237 [2024-12-09 11:00:02.229668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.240645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eec408 00:17:09.237 [2024-12-09 11:00:02.242095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.237 [2024-12-09 11:00:02.242123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.253295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eebb98 00:17:09.237 [2024-12-09 11:00:02.254719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.237 [2024-12-09 11:00:02.254753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.265934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eeb328 00:17:09.237 [2024-12-09 11:00:02.267358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.237 [2024-12-09 11:00:02.267388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.278139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eeaab8 00:17:09.237 [2024-12-09 11:00:02.279520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.237 [2024-12-09 11:00:02.279550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.290373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eea248 00:17:09.237 [2024-12-09 11:00:02.291755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:09.237 [2024-12-09 11:00:02.291784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.302572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee99d8 00:17:09.237 [2024-12-09 11:00:02.303906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.237 [2024-12-09 11:00:02.303934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.315077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee9168 00:17:09.237 [2024-12-09 11:00:02.316511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.237 [2024-12-09 11:00:02.316542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.327702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee88f8 00:17:09.237 [2024-12-09 11:00:02.329043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.237 [2024-12-09 11:00:02.329071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.340084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee8088 00:17:09.237 [2024-12-09 11:00:02.341416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.237 [2024-12-09 11:00:02.341447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.352454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee7818 00:17:09.237 [2024-12-09 11:00:02.353764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.237 [2024-12-09 11:00:02.353793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:09.237 [2024-12-09 11:00:02.364685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee6fa8 00:17:09.238 [2024-12-09 11:00:02.365992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.238 [2024-12-09 11:00:02.366022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:09.238 [2024-12-09 11:00:02.376834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee6738 00:17:09.238 [2024-12-09 11:00:02.378070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8296 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:09.238 [2024-12-09 11:00:02.378099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:09.238 [2024-12-09 11:00:02.389026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee5ec8 00:17:09.238 [2024-12-09 11:00:02.390245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.238 [2024-12-09 11:00:02.390273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:09.238 [2024-12-09 11:00:02.401276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee5658 00:17:09.238 [2024-12-09 11:00:02.402464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.238 [2024-12-09 11:00:02.402494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:09.238 [2024-12-09 11:00:02.413396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee4de8 00:17:09.238 [2024-12-09 11:00:02.414657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.238 [2024-12-09 11:00:02.414688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.425754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee4578 00:17:09.498 [2024-12-09 11:00:02.426915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.426945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.437778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee3d08 00:17:09.498 [2024-12-09 11:00:02.438966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.438995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.449925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee3498 00:17:09.498 [2024-12-09 11:00:02.451048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.451079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.462369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee2c28 00:17:09.498 [2024-12-09 11:00:02.463526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:117 nsid:1 lba:2926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.463558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.474540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee23b8 00:17:09.498 [2024-12-09 11:00:02.475683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.475714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.486629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee1b48 00:17:09.498 [2024-12-09 11:00:02.487710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.487738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.498557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee12d8 00:17:09.498 [2024-12-09 11:00:02.499620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.499649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.510528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee0a68 00:17:09.498 [2024-12-09 11:00:02.511623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.511652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.522597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee01f8 00:17:09.498 [2024-12-09 11:00:02.523633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.523661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.534549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016edf988 00:17:09.498 [2024-12-09 11:00:02.535570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.535600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.546462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016edf118 00:17:09.498 [2024-12-09 11:00:02.547469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.547499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.558417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ede8a8 00:17:09.498 [2024-12-09 11:00:02.559412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.559444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.570539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ede038 00:17:09.498 [2024-12-09 11:00:02.571540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.571570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.587674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ede038 00:17:09.498 [2024-12-09 11:00:02.589695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.589722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.599706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ede8a8 00:17:09.498 [2024-12-09 11:00:02.601685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.601714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.611647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016edf118 00:17:09.498 [2024-12-09 11:00:02.613630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.613662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.623602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016edf988 00:17:09.498 [2024-12-09 11:00:02.625554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.625581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.635559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee01f8 00:17:09.498 [2024-12-09 
11:00:02.637510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.637536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.647480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee0a68 00:17:09.498 [2024-12-09 11:00:02.649410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.649440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.659510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee12d8 00:17:09.498 [2024-12-09 11:00:02.661470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.661499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:09.498 [2024-12-09 11:00:02.671537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee1b48 00:17:09.498 [2024-12-09 11:00:02.673462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.498 [2024-12-09 11:00:02.673489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.683889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee23b8 00:17:09.759 20622.00 IOPS, 80.55 MiB/s [2024-12-09T11:00:02.938Z] [2024-12-09 11:00:02.685800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.685822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.695993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee2c28 00:17:09.759 [2024-12-09 11:00:02.697869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.697897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.708110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee3498 00:17:09.759 [2024-12-09 11:00:02.709957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.709984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.720165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee3d08 00:17:09.759 [2024-12-09 11:00:02.721979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.722013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.732154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee4578 00:17:09.759 [2024-12-09 11:00:02.733970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.733993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.744194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee4de8 00:17:09.759 [2024-12-09 11:00:02.745959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.745983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.756174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee5658 00:17:09.759 [2024-12-09 11:00:02.757938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.757964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.768133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee5ec8 00:17:09.759 [2024-12-09 11:00:02.769878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.769915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.780323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee6738 00:17:09.759 [2024-12-09 11:00:02.782078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.782103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.792884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee6fa8 00:17:09.759 [2024-12-09 11:00:02.794778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.794801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.806388] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee7818 00:17:09.759 [2024-12-09 11:00:02.808115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.808141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.818676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee8088 00:17:09.759 [2024-12-09 11:00:02.820460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.820486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.830867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee88f8 00:17:09.759 [2024-12-09 11:00:02.832560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.832587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.843114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee9168 00:17:09.759 [2024-12-09 11:00:02.844772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.844813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.855209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ee99d8 00:17:09.759 [2024-12-09 11:00:02.856943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.856972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.867650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eea248 00:17:09.759 [2024-12-09 11:00:02.869318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.869342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.879675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eeaab8 00:17:09.759 [2024-12-09 11:00:02.881353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.881379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:09.759 
[2024-12-09 11:00:02.892004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eeb328 00:17:09.759 [2024-12-09 11:00:02.893651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.893676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.904159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eebb98 00:17:09.759 [2024-12-09 11:00:02.905761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.905790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.916191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eec408 00:17:09.759 [2024-12-09 11:00:02.917751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.759 [2024-12-09 11:00:02.917782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:09.759 [2024-12-09 11:00:02.928141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eecc78 00:17:09.760 [2024-12-09 11:00:02.929677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:09.760 [2024-12-09 11:00:02.929702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:02.940503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eed4e8 00:17:10.020 [2024-12-09 11:00:02.942056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:02.942081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:02.952796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eedd58 00:17:10.020 [2024-12-09 11:00:02.954314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:02.954339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:02.964994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eee5c8 00:17:10.020 [2024-12-09 11:00:02.966513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:02.966541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 
dnr:0 00:17:10.020 [2024-12-09 11:00:02.977160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eeee38 00:17:10.020 [2024-12-09 11:00:02.978594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:02.978626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:02.989151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eef6a8 00:17:10.020 [2024-12-09 11:00:02.990550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:02.990580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.001116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016eeff18 00:17:10.020 [2024-12-09 11:00:03.002500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.002530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.013045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef0788 00:17:10.020 [2024-12-09 11:00:03.014490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.014518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.025210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef0ff8 00:17:10.020 [2024-12-09 11:00:03.026639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.026668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.037274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef1868 00:17:10.020 [2024-12-09 11:00:03.038668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.038699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.049511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef20d8 00:17:10.020 [2024-12-09 11:00:03.050908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.050937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 
sqhd:0024 p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.061638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef2948 00:17:10.020 [2024-12-09 11:00:03.063022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.063052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.073723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef31b8 00:17:10.020 [2024-12-09 11:00:03.075088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.075120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.085752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef3a28 00:17:10.020 [2024-12-09 11:00:03.087131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.087161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.097929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef4298 00:17:10.020 [2024-12-09 11:00:03.099255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.099283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.109983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef4b08 00:17:10.020 [2024-12-09 11:00:03.111246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.111275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.121987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef5378 00:17:10.020 [2024-12-09 11:00:03.123270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.123299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.134115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef5be8 00:17:10.020 [2024-12-09 11:00:03.135351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.135397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.146128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef6458 00:17:10.020 [2024-12-09 11:00:03.147404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.147434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.158227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef6cc8 00:17:10.020 [2024-12-09 11:00:03.159477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.159507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.170326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef7538 00:17:10.020 [2024-12-09 11:00:03.171516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.171546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.182367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef7da8 00:17:10.020 [2024-12-09 11:00:03.183582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.183614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:10.020 [2024-12-09 11:00:03.194508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef8618 00:17:10.020 [2024-12-09 11:00:03.195747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.020 [2024-12-09 11:00:03.195782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:10.280 [2024-12-09 11:00:03.206878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef8e88 00:17:10.280 [2024-12-09 11:00:03.208008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.280 [2024-12-09 11:00:03.208045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:10.280 [2024-12-09 11:00:03.219220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef96f8 00:17:10.280 [2024-12-09 11:00:03.220381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.280 [2024-12-09 11:00:03.220412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:10.280 [2024-12-09 11:00:03.231298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef9f68 00:17:10.280 [2024-12-09 11:00:03.232464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.280 [2024-12-09 11:00:03.232493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:10.280 [2024-12-09 11:00:03.243358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efa7d8 00:17:10.280 [2024-12-09 11:00:03.244512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.280 [2024-12-09 11:00:03.244540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:10.280 [2024-12-09 11:00:03.255691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efb048 00:17:10.280 [2024-12-09 11:00:03.256967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.280 [2024-12-09 11:00:03.257000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:10.280 [2024-12-09 11:00:03.268085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efb8b8 00:17:10.280 [2024-12-09 11:00:03.269218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.281 [2024-12-09 11:00:03.269247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.281 [2024-12-09 11:00:03.280340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efc128 00:17:10.281 [2024-12-09 11:00:03.281459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.281 [2024-12-09 11:00:03.281501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:10.281 [2024-12-09 11:00:03.292445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efc998 00:17:10.281 [2024-12-09 11:00:03.293545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.281 [2024-12-09 11:00:03.293574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:10.281 [2024-12-09 11:00:03.304823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efd208 00:17:10.281 [2024-12-09 11:00:03.305901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.281 [2024-12-09 11:00:03.305930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:10.281 [2024-12-09 11:00:03.316970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efda78 00:17:10.281 [2024-12-09 11:00:03.318043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:46 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.281 [2024-12-09 11:00:03.318073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:10.281 [2024-12-09 11:00:03.329151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efe2e8 00:17:10.281 [2024-12-09 11:00:03.330183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.281 [2024-12-09 11:00:03.330213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:10.281 [2024-12-09 11:00:03.341287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efeb58 00:17:10.281 [2024-12-09 11:00:03.342270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.281 [2024-12-09 11:00:03.342299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:10.281 [2024-12-09 11:00:03.358145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efef90 00:17:10.281 [2024-12-09 11:00:03.360146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.281 [2024-12-09 11:00:03.360174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:10.281 [2024-12-09 11:00:03.370736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efeb58 00:17:10.281 [2024-12-09 11:00:03.372757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.281 [2024-12-09 11:00:03.372784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:10.281 [2024-12-09 11:00:03.383193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efe2e8 00:17:10.281 [2024-12-09 11:00:03.385337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.281 [2024-12-09 11:00:03.385366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:10.281 [2024-12-09 11:00:03.396266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efda78 00:17:10.281 [2024-12-09 11:00:03.398406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.281 [2024-12-09 
11:00:03.398434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:10.281 [2024-12-09 11:00:03.409246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efd208 00:17:10.281 [2024-12-09 11:00:03.411221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.281 [2024-12-09 11:00:03.411247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:10.281 [2024-12-09 11:00:03.421789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efc998 00:17:10.281 [2024-12-09 11:00:03.423723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.281 [2024-12-09 11:00:03.423753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:10.281 [2024-12-09 11:00:03.434110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efc128 00:17:10.281 [2024-12-09 11:00:03.435992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.281 [2024-12-09 11:00:03.436017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:10.281 [2024-12-09 11:00:03.446505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efb8b8 00:17:10.281 [2024-12-09 11:00:03.448450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.281 [2024-12-09 11:00:03.448477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:10.541 [2024-12-09 11:00:03.459279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efb048 00:17:10.541 [2024-12-09 11:00:03.461347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.541 [2024-12-09 11:00:03.461375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.541 [2024-12-09 11:00:03.472036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016efa7d8 00:17:10.541 [2024-12-09 11:00:03.474033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.541 [2024-12-09 11:00:03.474062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:10.541 [2024-12-09 11:00:03.484789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef9f68 00:17:10.541 [2024-12-09 11:00:03.486697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:10.541 [2024-12-09 11:00:03.486722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:10.541 [2024-12-09 11:00:03.497318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef96f8 00:17:10.541 [2024-12-09 11:00:03.499177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.541 [2024-12-09 11:00:03.499204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:10.541 [2024-12-09 11:00:03.509799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef8e88 00:17:10.541 [2024-12-09 11:00:03.511625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.541 [2024-12-09 11:00:03.511650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:10.541 [2024-12-09 11:00:03.522190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef8618 00:17:10.541 [2024-12-09 11:00:03.524010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.541 [2024-12-09 11:00:03.524042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:10.541 [2024-12-09 11:00:03.534345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef7da8 00:17:10.541 [2024-12-09 11:00:03.536135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.541 [2024-12-09 11:00:03.536161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:10.541 [2024-12-09 11:00:03.546449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef7538 00:17:10.541 [2024-12-09 11:00:03.548249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.541 [2024-12-09 11:00:03.548276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:10.541 [2024-12-09 11:00:03.558692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef6cc8 00:17:10.542 [2024-12-09 11:00:03.560458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.542 [2024-12-09 11:00:03.560487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:10.542 [2024-12-09 11:00:03.571211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef6458 00:17:10.542 [2024-12-09 11:00:03.572950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12566 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:10.542 [2024-12-09 11:00:03.572993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:10.542 [2024-12-09 11:00:03.583536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef5be8 00:17:10.542 [2024-12-09 11:00:03.585287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.542 [2024-12-09 11:00:03.585312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:10.542 [2024-12-09 11:00:03.595681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef5378 00:17:10.542 [2024-12-09 11:00:03.597400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.542 [2024-12-09 11:00:03.597423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:10.542 [2024-12-09 11:00:03.607650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef4b08 00:17:10.542 [2024-12-09 11:00:03.609360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.542 [2024-12-09 11:00:03.609384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:10.542 [2024-12-09 11:00:03.619846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef4298 00:17:10.542 [2024-12-09 11:00:03.621538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.542 [2024-12-09 11:00:03.621563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:10.542 [2024-12-09 11:00:03.631957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef3a28 00:17:10.542 [2024-12-09 11:00:03.633624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.542 [2024-12-09 11:00:03.633649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:10.542 [2024-12-09 11:00:03.644008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef31b8 00:17:10.542 [2024-12-09 11:00:03.645663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.542 [2024-12-09 11:00:03.645688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:10.542 [2024-12-09 11:00:03.656115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef2948 00:17:10.542 [2024-12-09 11:00:03.657740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5005 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.542 [2024-12-09 11:00:03.657770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:10.542 [2024-12-09 11:00:03.668072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef20d8 00:17:10.542 [2024-12-09 11:00:03.669689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.542 [2024-12-09 11:00:03.669713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:10.542 [2024-12-09 11:00:03.680272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cb70) with pdu=0x200016ef1868 00:17:10.542 [2024-12-09 11:00:03.681902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.542 [2024-12-09 11:00:03.681926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:10.542 20684.00 IOPS, 80.80 MiB/s 00:17:10.542 Latency(us) 00:17:10.542 [2024-12-09T11:00:03.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.542 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:10.542 nvme0n1 : 2.01 20671.13 80.75 0.00 0.00 6187.11 3534.37 23467.04 00:17:10.542 [2024-12-09T11:00:03.721Z] =================================================================================================================== 00:17:10.542 [2024-12-09T11:00:03.721Z] Total : 20671.13 80.75 0.00 0.00 6187.11 3534.37 23467.04 00:17:10.542 { 00:17:10.542 "results": [ 00:17:10.542 { 00:17:10.542 "job": "nvme0n1", 00:17:10.542 "core_mask": "0x2", 00:17:10.542 "workload": "randwrite", 00:17:10.542 "status": "finished", 00:17:10.542 "queue_depth": 128, 00:17:10.542 "io_size": 4096, 00:17:10.542 "runtime": 2.007437, 00:17:10.542 "iops": 20671.13438678275, 00:17:10.542 "mibps": 80.74661869837011, 00:17:10.542 "io_failed": 0, 00:17:10.542 "io_timeout": 0, 00:17:10.542 "avg_latency_us": 6187.113203229774, 00:17:10.542 "min_latency_us": 3534.3650655021834, 00:17:10.542 "max_latency_us": 23467.039301310044 00:17:10.542 } 00:17:10.542 ], 00:17:10.542 "core_count": 1 00:17:10.542 } 00:17:10.542 11:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:10.802 11:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:10.802 11:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:10.802 11:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:10.802 | .driver_specific 00:17:10.802 | .nvme_error 00:17:10.802 | .status_code 00:17:10.802 | .command_transient_transport_error' 00:17:10.802 11:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:17:10.802 11:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80349 00:17:10.802 11:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 
80349 ']' 00:17:10.802 11:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80349 00:17:10.802 11:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:17:10.802 11:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.802 11:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80349 00:17:10.802 11:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:10.802 11:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:10.802 killing process with pid 80349 00:17:10.802 11:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80349' 00:17:10.802 Received shutdown signal, test time was about 2.000000 seconds 00:17:10.802 00:17:10.802 Latency(us) 00:17:10.802 [2024-12-09T11:00:03.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.802 [2024-12-09T11:00:03.982Z] =================================================================================================================== 00:17:10.803 [2024-12-09T11:00:03.982Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:10.803 11:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80349 00:17:10.803 11:00:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80349 00:17:11.062 11:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:17:11.062 11:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:11.062 11:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:17:11.062 11:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:17:11.063 11:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:17:11.063 11:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80404 00:17:11.063 11:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80404 /var/tmp/bperf.sock 00:17:11.063 11:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:11.063 11:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80404 ']' 00:17:11.063 11:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:11.063 11:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:11.063 11:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
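The xtrace above shows how host/digest.sh derives the transient-error count for the completed randwrite run: it queries bdev iostat over the bperf RPC socket, pulls the NVMe command_transient_transport_error counter out with jq, and asserts it is non-zero (162 here) before killing the bdevperf process. The stand-alone consolidation below is an illustrative sketch only, not part of the captured log; the rpc.py path, socket, bdev name, and jq filter are taken verbatim from the trace, and the helper name get_transient_errcount simply mirrors the function traced in host/digest.sh.

#!/usr/bin/env bash
# Illustrative sketch of the transient-error check traced above.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # rpc.py path as used in the trace
sock=/var/tmp/bperf.sock                          # bdevperf RPC socket from the trace

get_transient_errcount() {
    # bdev_get_iostat reports per-bdev NVMe error counters when the controller
    # was attached with --nvme-error-stat; extract the transient transport count.
    local bdev=$1
    "$rpc" -s "$sock" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# The data-digest errors injected during the run must surface as transient
# transport errors (162 in this log); a zero count fails the check.
(( errcount > 0 ))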
00:17:11.063 11:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.063 11:00:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:11.063 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:11.063 Zero copy mechanism will not be used. 00:17:11.063 [2024-12-09 11:00:04.221291] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:17:11.063 [2024-12-09 11:00:04.221353] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80404 ] 00:17:11.322 [2024-12-09 11:00:04.370328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.322 [2024-12-09 11:00:04.417218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.322 [2024-12-09 11:00:04.458191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:12.261 11:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.261 11:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:17:12.261 11:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:12.261 11:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:12.261 11:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:12.261 11:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.261 11:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:12.261 11:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.261 11:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:12.261 11:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:12.521 nvme0n1 00:17:12.521 11:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:12.521 11:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.521 11:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:12.521 11:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.521 11:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:12.521 11:00:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:12.521 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:12.521 Zero copy mechanism will not be used. 00:17:12.521 Running I/O for 2 seconds... 00:17:12.521 [2024-12-09 11:00:05.680231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.521 [2024-12-09 11:00:05.680372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.521 [2024-12-09 11:00:05.680403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:12.521 [2024-12-09 11:00:05.685721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.521 [2024-12-09 11:00:05.685887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.521 [2024-12-09 11:00:05.685921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.521 [2024-12-09 11:00:05.690702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.521 [2024-12-09 11:00:05.690886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.521 [2024-12-09 11:00:05.690911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:12.521 [2024-12-09 11:00:05.695517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.521 [2024-12-09 11:00:05.695686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.521 [2024-12-09 11:00:05.695706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:12.782 [2024-12-09 11:00:05.700426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.782 [2024-12-09 11:00:05.700589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.782 [2024-12-09 11:00:05.700620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:12.782 [2024-12-09 11:00:05.705282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.782 [2024-12-09 11:00:05.705470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.782 [2024-12-09 11:00:05.705499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.782 [2024-12-09 11:00:05.710142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.782 [2024-12-09 11:00:05.710300] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.782 [2024-12-09 11:00:05.710319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:12.782 [2024-12-09 11:00:05.714899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.782 [2024-12-09 11:00:05.715055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.782 [2024-12-09 11:00:05.715073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:12.782 [2024-12-09 11:00:05.719753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.782 [2024-12-09 11:00:05.719951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.782 [2024-12-09 11:00:05.719977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:12.782 [2024-12-09 11:00:05.724813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.782 [2024-12-09 11:00:05.725040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.782 [2024-12-09 11:00:05.725063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.782 [2024-12-09 11:00:05.729659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.782 [2024-12-09 11:00:05.729833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.782 [2024-12-09 11:00:05.729850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:12.782 [2024-12-09 11:00:05.734458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.782 [2024-12-09 11:00:05.734652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.782 [2024-12-09 11:00:05.734680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:12.782 [2024-12-09 11:00:05.739260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.782 [2024-12-09 11:00:05.739424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.782 [2024-12-09 11:00:05.739452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:12.782 [2024-12-09 11:00:05.744079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.782 [2024-12-09 
11:00:05.744270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.782 [2024-12-09 11:00:05.744300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.782 [2024-12-09 11:00:05.749045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.782 [2024-12-09 11:00:05.749194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.782 [2024-12-09 11:00:05.749212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:12.782 [2024-12-09 11:00:05.754082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.782 [2024-12-09 11:00:05.754245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.782 [2024-12-09 11:00:05.754288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:12.782 [2024-12-09 11:00:05.759094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.782 [2024-12-09 11:00:05.759245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.782 [2024-12-09 11:00:05.759264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:12.782 [2024-12-09 11:00:05.763973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.782 [2024-12-09 11:00:05.764155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.782 [2024-12-09 11:00:05.764181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.782 [2024-12-09 11:00:05.768770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.782 [2024-12-09 11:00:05.768936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.782 [2024-12-09 11:00:05.768959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:12.782 [2024-12-09 11:00:05.773739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.782 [2024-12-09 11:00:05.773897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.773918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.778698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 
00:17:12.783 [2024-12-09 11:00:05.778877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.778899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.783541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.783697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.783724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.788739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.788896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.788919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.793716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.793897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.793923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.798849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.799028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.799046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.803721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.803893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.803915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.808907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.809075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.809102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.813805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) 
with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.813990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.814013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.818807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.818961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.818978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.823927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.824100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.824117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.828901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.829050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.829067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.833834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.834003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.834020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.838995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.839149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.839172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.843964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.844140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.844157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.848912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.849100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.849119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.853870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.854033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.854059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.859020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.859165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.859192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.864171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.864348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.864382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.869155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.869337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.869357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.874125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.874384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.874405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.879097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.879241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.879266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.884071] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.884252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.884282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.889255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.889414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.889432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.894089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.894245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.894263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.899057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.899216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.899234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.904199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.904354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.904372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.909464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.783 [2024-12-09 11:00:05.909660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.783 [2024-12-09 11:00:05.909679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:12.783 [2024-12-09 11:00:05.914419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.784 [2024-12-09 11:00:05.914566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.784 [2024-12-09 11:00:05.914584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:12.784 
[2024-12-09 11:00:05.919289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.784 [2024-12-09 11:00:05.919448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.784 [2024-12-09 11:00:05.919466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:12.784 [2024-12-09 11:00:05.924168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.784 [2024-12-09 11:00:05.924343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.784 [2024-12-09 11:00:05.924363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.784 [2024-12-09 11:00:05.929103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.784 [2024-12-09 11:00:05.929282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.784 [2024-12-09 11:00:05.929312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:12.784 [2024-12-09 11:00:05.934228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.784 [2024-12-09 11:00:05.934401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.784 [2024-12-09 11:00:05.934418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:12.784 [2024-12-09 11:00:05.939338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.784 [2024-12-09 11:00:05.939490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.784 [2024-12-09 11:00:05.939508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:12.784 [2024-12-09 11:00:05.944326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.784 [2024-12-09 11:00:05.944516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.784 [2024-12-09 11:00:05.944540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:12.784 [2024-12-09 11:00:05.949307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.784 [2024-12-09 11:00:05.949463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.784 [2024-12-09 11:00:05.949481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:17:12.784 [2024-12-09 11:00:05.954148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:12.784 [2024-12-09 11:00:05.954338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:12.784 [2024-12-09 11:00:05.954367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.046 [2024-12-09 11:00:05.959048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.046 [2024-12-09 11:00:05.959238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.046 [2024-12-09 11:00:05.959264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.046 [2024-12-09 11:00:05.964010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.046 [2024-12-09 11:00:05.964201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.046 [2024-12-09 11:00:05.964220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.046 [2024-12-09 11:00:05.968929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.046 [2024-12-09 11:00:05.969114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.046 [2024-12-09 11:00:05.969132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.046 [2024-12-09 11:00:05.973675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.046 [2024-12-09 11:00:05.973837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.046 [2024-12-09 11:00:05.973854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.046 [2024-12-09 11:00:05.978826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.046 [2024-12-09 11:00:05.978963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.046 [2024-12-09 11:00:05.978985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.046 [2024-12-09 11:00:05.984001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.046 [2024-12-09 11:00:05.984180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.046 [2024-12-09 11:00:05.984202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.046 [2024-12-09 11:00:05.989035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.046 [2024-12-09 11:00:05.989197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.046 [2024-12-09 11:00:05.989214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.046 [2024-12-09 11:00:05.994029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.046 [2024-12-09 11:00:05.994184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.046 [2024-12-09 11:00:05.994201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.046 [2024-12-09 11:00:05.998932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.046 [2024-12-09 11:00:05.999104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.046 [2024-12-09 11:00:05.999131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.046 [2024-12-09 11:00:06.003696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.046 [2024-12-09 11:00:06.003850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.046 [2024-12-09 11:00:06.003868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.046 [2024-12-09 11:00:06.008505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.046 [2024-12-09 11:00:06.008665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.046 [2024-12-09 11:00:06.008682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.046 [2024-12-09 11:00:06.013459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.046 [2024-12-09 11:00:06.013723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.046 [2024-12-09 11:00:06.013760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.046 [2024-12-09 11:00:06.018441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.046 [2024-12-09 11:00:06.018609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.046 [2024-12-09 11:00:06.018636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.046 [2024-12-09 11:00:06.023636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.046 [2024-12-09 11:00:06.023783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.046 [2024-12-09 11:00:06.023801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.046 [2024-12-09 11:00:06.028632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.046 [2024-12-09 11:00:06.028789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.046 [2024-12-09 11:00:06.028806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.046 [2024-12-09 11:00:06.033812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.046 [2024-12-09 11:00:06.034001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.034024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.038725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.038905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.038923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.043873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.044016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.044045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.048923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.049066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.049084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.054182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.054367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.054387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.058992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.059140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.059163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.063970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.064120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.064137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.068872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.069024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.069041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.073962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.074122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.074139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.078812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.078978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.078995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.083737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.083938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.083967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.088613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.088774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 
11:00:06.088792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.093552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.093712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.093729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.098462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.098622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.098639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.103500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.103668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.103685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.108621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.108782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.108799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.113807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.113934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.113957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.118759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.118900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.118920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.123554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.123692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:13.047 [2024-12-09 11:00:06.123720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.128257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.128416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.128443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.133175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.133334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.133351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.138002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.138171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.138209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.142915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.143073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.143095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.147826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.147981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.147997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.152772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.152915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.152932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.157787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.157956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.157973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.162799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.162957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.162975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.167846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.168036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.047 [2024-12-09 11:00:06.168060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.047 [2024-12-09 11:00:06.172942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.047 [2024-12-09 11:00:06.173104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.048 [2024-12-09 11:00:06.173122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.048 [2024-12-09 11:00:06.177815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.048 [2024-12-09 11:00:06.177976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.048 [2024-12-09 11:00:06.177994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.048 [2024-12-09 11:00:06.183031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.048 [2024-12-09 11:00:06.183200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.048 [2024-12-09 11:00:06.183228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.048 [2024-12-09 11:00:06.188196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.048 [2024-12-09 11:00:06.188340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.048 [2024-12-09 11:00:06.188370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.048 [2024-12-09 11:00:06.193196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.048 [2024-12-09 11:00:06.193376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.048 [2024-12-09 11:00:06.193396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.048 [2024-12-09 11:00:06.198148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.048 [2024-12-09 11:00:06.198338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.048 [2024-12-09 11:00:06.198356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.048 [2024-12-09 11:00:06.203012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.048 [2024-12-09 11:00:06.203156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.048 [2024-12-09 11:00:06.203173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.048 [2024-12-09 11:00:06.208017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.048 [2024-12-09 11:00:06.208186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.048 [2024-12-09 11:00:06.208204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.048 [2024-12-09 11:00:06.212906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.048 [2024-12-09 11:00:06.213078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.048 [2024-12-09 11:00:06.213101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.048 [2024-12-09 11:00:06.218060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.048 [2024-12-09 11:00:06.218242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.048 [2024-12-09 11:00:06.218262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.309 [2024-12-09 11:00:06.223164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.309 [2024-12-09 11:00:06.223322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.309 [2024-12-09 11:00:06.223340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.309 [2024-12-09 11:00:06.228122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.309 [2024-12-09 11:00:06.228297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.309 [2024-12-09 11:00:06.228326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.309 [2024-12-09 11:00:06.233008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.309 [2024-12-09 11:00:06.233178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.309 [2024-12-09 11:00:06.233197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.309 [2024-12-09 11:00:06.238089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.238243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.238261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.243075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.243238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.243266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.247980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.248177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.248204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.253029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.253180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.253199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.258105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.258251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.258268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.263098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 
11:00:06.263252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.263269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.267972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.268140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.268157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.272721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.272884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.272906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.277808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.277968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.277994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.283115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.283261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.283279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.288217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.288361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.288379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.293096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.293245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.293270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.297949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 
00:17:13.310 [2024-12-09 11:00:06.298133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.298161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.302764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.302944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.302961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.307725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.307882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.307899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.312815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.312972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.312990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.318025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.318172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.318189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.323141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.323270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.323289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.327981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.328150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.328177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.332720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.332913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.332931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.337544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.337697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.337715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.342425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.342606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.342623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.347299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.347480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.347497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.352359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.352508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.352526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.357380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.357553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.357570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.362392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.362549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.362566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.367473] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.310 [2024-12-09 11:00:06.367642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.310 [2024-12-09 11:00:06.367659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.310 [2024-12-09 11:00:06.372615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.372782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.372799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.377662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.377841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.377859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.382760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.382912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.382928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.387875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.388038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.388055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.393054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.393210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.393229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.398198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.398369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.398386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.403045] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.403214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.403232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.407813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.407990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.408007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.412597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.412749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.412777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.417422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.417608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.417625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.422369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.422526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.422542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.427497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.427649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.427666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.432652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.432834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.432852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.311 
[2024-12-09 11:00:06.437690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.437858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.437876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.442728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.442888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.442905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.448036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.448228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.448245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.453095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.453239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.453257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.458116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.458301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.458318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.463297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.463449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.463467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.468312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.468460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.468477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.472964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.473276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.473307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.478061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.478527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.478560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.311 [2024-12-09 11:00:06.483294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.311 [2024-12-09 11:00:06.483765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.311 [2024-12-09 11:00:06.483823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.488714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.489171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.489204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.494025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.494502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.494534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.499456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.499905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.499936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.504678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.505148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.505179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.510022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.510459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.510491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.515134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.515562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.515594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.520253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.520680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.520712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.525416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.525892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.525922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.530783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.531222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.531253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.536071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.536525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.536558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.541385] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.541835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.541865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.546507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.546965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.546996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.551767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.552223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.552265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.556602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.557028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.557075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.561531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.561982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.562013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.566948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.567408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.567440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.572317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.572748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.572785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.577515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.577957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.577987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.582698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.583156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.583188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.587756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.588211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.588242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.592712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.593173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.593205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.597656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.598084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.598115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.602665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.603093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.603123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.607670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.608165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.608196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.612814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.613248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 
11:00:06.613279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.617988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.618422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.573 [2024-12-09 11:00:06.618459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.573 [2024-12-09 11:00:06.623096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.573 [2024-12-09 11:00:06.623510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.623541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.627991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.628463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.628497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.632950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.633379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.633412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.637976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.638443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.638475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.642929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.643379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.643410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.648067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.648483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:13.574 [2024-12-09 11:00:06.648514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.653189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.653629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.653661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.658246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.658696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.658728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.663399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.663868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.663900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.668541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.668980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.669011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.574 6122.00 IOPS, 765.25 MiB/s [2024-12-09T11:00:06.753Z] [2024-12-09 11:00:06.674497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.674946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.674977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.679416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.679862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.679892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.684266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.684724] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.684770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.689631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.690096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.690127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.694870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.695304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.695335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.700096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.700550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.700596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.705588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.706014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.706045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.710635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.711093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.711125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.715669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.716135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.716166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.720844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.721303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.721351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.725890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.726330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.726361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.730766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.731185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.731216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.735576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.736051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.736088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.740516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.740988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.741019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.574 [2024-12-09 11:00:06.745562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.574 [2024-12-09 11:00:06.746001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.574 [2024-12-09 11:00:06.746033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.750535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.836 [2024-12-09 11:00:06.750992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.836 [2024-12-09 11:00:06.751022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.755525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.836 [2024-12-09 
11:00:06.755979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.836 [2024-12-09 11:00:06.756010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.760409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.836 [2024-12-09 11:00:06.760864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.836 [2024-12-09 11:00:06.760897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.765375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.836 [2024-12-09 11:00:06.765856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.836 [2024-12-09 11:00:06.765887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.770303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.836 [2024-12-09 11:00:06.770757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.836 [2024-12-09 11:00:06.770794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.775207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.836 [2024-12-09 11:00:06.775652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.836 [2024-12-09 11:00:06.775683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.780364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.836 [2024-12-09 11:00:06.780822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.836 [2024-12-09 11:00:06.780854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.785346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.836 [2024-12-09 11:00:06.785773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.836 [2024-12-09 11:00:06.785804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.790288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with 
pdu=0x200016eff3c8 00:17:13.836 [2024-12-09 11:00:06.790700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.836 [2024-12-09 11:00:06.790740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.795117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.836 [2024-12-09 11:00:06.795564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.836 [2024-12-09 11:00:06.795595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.800292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.836 [2024-12-09 11:00:06.800753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.836 [2024-12-09 11:00:06.800795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.805384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.836 [2024-12-09 11:00:06.805851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.836 [2024-12-09 11:00:06.805882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.810508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.836 [2024-12-09 11:00:06.810958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.836 [2024-12-09 11:00:06.810991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.815791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.836 [2024-12-09 11:00:06.816218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.836 [2024-12-09 11:00:06.816249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.820644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.836 [2024-12-09 11:00:06.821084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.836 [2024-12-09 11:00:06.821117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.825611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.836 [2024-12-09 11:00:06.826061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.836 [2024-12-09 11:00:06.826092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.836 [2024-12-09 11:00:06.830647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.831116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.831148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.835747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.836226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.836255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.840667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.841152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.841188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.845643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.846119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.846150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.850618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.851087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.851119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.855511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.855962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.855992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.860430] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.860885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.860917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.865330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.865779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.865818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.870406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.870864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.870896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.875464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.875913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.875943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.880412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.880855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.880888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.885423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.885868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.885898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.890293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.890731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.890774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.895217] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.895683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.895718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.900276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.900724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.900771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.905429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.905871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.905902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.910439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.910906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.910936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.915442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.915911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.915942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.920426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.920899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.920933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.925858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.926301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.926331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.837 
[2024-12-09 11:00:06.930762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.931221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.931251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.935689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.936173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.936204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.940948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.941385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.941434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.946048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.946483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.946514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.951196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.951633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.951665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.956341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.956823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.956854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.961403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.837 [2024-12-09 11:00:06.961837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.837 [2024-12-09 11:00:06.961869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:17:13.837 [2024-12-09 11:00:06.966338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.838 [2024-12-09 11:00:06.966791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.838 [2024-12-09 11:00:06.966819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.838 [2024-12-09 11:00:06.971253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.838 [2024-12-09 11:00:06.971684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.838 [2024-12-09 11:00:06.971715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.838 [2024-12-09 11:00:06.976245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.838 [2024-12-09 11:00:06.976669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.838 [2024-12-09 11:00:06.976700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.838 [2024-12-09 11:00:06.981193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.838 [2024-12-09 11:00:06.981624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.838 [2024-12-09 11:00:06.981652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.838 [2024-12-09 11:00:06.986148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.838 [2024-12-09 11:00:06.986602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.838 [2024-12-09 11:00:06.986634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.838 [2024-12-09 11:00:06.991092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.838 [2024-12-09 11:00:06.991538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.838 [2024-12-09 11:00:06.991570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:13.838 [2024-12-09 11:00:06.996081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.838 [2024-12-09 11:00:06.996533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.838 [2024-12-09 11:00:06.996565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:13.838 [2024-12-09 11:00:07.001465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.838 [2024-12-09 11:00:07.001911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.838 [2024-12-09 11:00:07.001942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:13.838 [2024-12-09 11:00:07.006611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.838 [2024-12-09 11:00:07.007037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.838 [2024-12-09 11:00:07.007067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:13.838 [2024-12-09 11:00:07.011812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:13.838 [2024-12-09 11:00:07.012280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:13.838 [2024-12-09 11:00:07.012312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.017071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.017606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.017640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.022406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.022857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.022888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.027657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.028136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.028167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.032740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.033210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.033241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.037867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.038306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.038337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.043130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.043569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.043601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.048401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.048871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.048902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.053587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.054020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.054056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.058447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.058891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.058921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.063299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.063739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.063782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.068285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.068719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.068776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.073413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.073845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.073878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.078469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.078923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.078954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.083490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.083939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.083969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.088599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.089037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.089067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.093648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.094097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.094127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.098720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.099169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.099198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.103978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.104447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 
11:00:07.104480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.109280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.109759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.109802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.114465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.114909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.114939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.119503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.119945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.119976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.099 [2024-12-09 11:00:07.124691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.099 [2024-12-09 11:00:07.125172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.099 [2024-12-09 11:00:07.125204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.129835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.130274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.130305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.134981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.135435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.135466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.140093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.140520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:14.100 [2024-12-09 11:00:07.140551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.145160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.145582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.145613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.150081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.150518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.150548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.155286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.155748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.155787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.160361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.160834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.160866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.165441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.165868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.165897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.170663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.171085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.171116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.176061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.176501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.176549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.181365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.181808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.181839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.186544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.187000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.187031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.191790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.192219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.192250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.197009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.197451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.197482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.201954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.202400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.202430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.207423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.207869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.207899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.212706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.213163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.213194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.218073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.218523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.218556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.223492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.223944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.223974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.228677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.229154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.229188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.233717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.234176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.234208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.238767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.239191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.239222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.243773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.244223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.244254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.249179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.249618] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.249649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.254467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.254916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.254946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.259858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.260335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.260366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.100 [2024-12-09 11:00:07.265103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.100 [2024-12-09 11:00:07.265535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.100 [2024-12-09 11:00:07.265567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.101 [2024-12-09 11:00:07.270043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.101 [2024-12-09 11:00:07.270483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.101 [2024-12-09 11:00:07.270514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.101 [2024-12-09 11:00:07.275030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.101 [2024-12-09 11:00:07.275522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.101 [2024-12-09 11:00:07.275570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.280139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.280580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.280613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.285214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.285659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.285693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.290194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.290647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.290678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.295176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.295647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.295678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.300235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.300700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.300731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.305185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.305609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.305643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.310048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.310498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.310529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.315089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.315514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.315548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.320211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 
11:00:07.320667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.320698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.325365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.325792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.325820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.330546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.330976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.331009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.335635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.336078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.336127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.340792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.341226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.341259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.345704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.346169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.346200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.350703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.351168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.351199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.355683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with 
pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.356172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.356204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.360665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.361102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.361132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.365640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.366063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.366093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.370702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.371133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.371165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.375866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.376304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.376335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.381255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.381712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.381753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.386372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.386830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.386867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.391584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.362 [2024-12-09 11:00:07.392016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.362 [2024-12-09 11:00:07.392072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.362 [2024-12-09 11:00:07.396997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.397462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.397495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.402127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.402565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.402596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.407303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.407747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.407784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.412251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.412724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.412772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.417213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.417681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.417712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.422217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.422695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.422739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.427407] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.427860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.427889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.432401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.432879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.432910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.437754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.438217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.438249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.443047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.443495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.443526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.448480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.448941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.448973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.453625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.454124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.454155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.458817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.459237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.459268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.464102] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.464552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.464582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.469444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.469897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.469928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.474919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.475364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.475395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.480190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.480630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.480662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.485403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.485856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.485888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.490716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.491186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.491218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.496103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.496561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.496593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.363 
[2024-12-09 11:00:07.501407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.501864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.501896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.506802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.507258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.507289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.512139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.512578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.512609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.517366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.517831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.517861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.522676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.523129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.523159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.527935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.528372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.528402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.363 [2024-12-09 11:00:07.533368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.363 [2024-12-09 11:00:07.533831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.363 [2024-12-09 11:00:07.533862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.538971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.539445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.539480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.544361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.544828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.544861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.549713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.550177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.550209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.555013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.555472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.555503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.560409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.560859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.560891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.565688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.566162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.566194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.571085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.571523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.571554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.576388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.576837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.576867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.581428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.581853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.581883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.586682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.587168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.587199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.591723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.592197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.592227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.596821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.597279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.597311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.602043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.602469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.602502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.607098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.607539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.607570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.612107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.612516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.612547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.617064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.617530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.617561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.622192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.622625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.622657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.627325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.627770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.627800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.632390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.632829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.632861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.637503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.637936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.637966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.642583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.643034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.643067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.647942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.648422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.648454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.653206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.653645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.653676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.658321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.625 [2024-12-09 11:00:07.658758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.625 [2024-12-09 11:00:07.658798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:14.625 [2024-12-09 11:00:07.663424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.626 [2024-12-09 11:00:07.663863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.626 [2024-12-09 11:00:07.663893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:14.626 [2024-12-09 11:00:07.668531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e5cd10) with pdu=0x200016eff3c8 00:17:14.626 [2024-12-09 11:00:07.668974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.626 [2024-12-09 11:00:07.669005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:17:14.626 6076.00 IOPS, 759.50 MiB/s
00:17:14.626 Latency(us)
00:17:14.626 [2024-12-09T11:00:07.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:14.626 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:17:14.626 nvme0n1 : 2.00 6074.68 759.33 0.00 0.00 2629.62 1760.03 10588.79
00:17:14.626 [2024-12-09T11:00:07.805Z] ===================================================================================================================
00:17:14.626 [2024-12-09T11:00:07.805Z] Total : 6074.68 759.33 0.00 0.00 2629.62 1760.03 10588.79
00:17:14.626 {
00:17:14.626   "results": [
00:17:14.626     {
00:17:14.626       "job": "nvme0n1",
00:17:14.626       "core_mask": "0x2",
00:17:14.626       "workload": "randwrite",
00:17:14.626       "status": "finished",
00:17:14.626       "queue_depth": 16,
00:17:14.626       "io_size": 131072,
      "runtime": 2.003728,
00:17:14.626       "iops": 6074.676802440252,
00:17:14.626       "mibps": 759.3346003050315,
00:17:14.626       "io_failed": 0,
00:17:14.626       "io_timeout": 0,
00:17:14.626       "avg_latency_us": 2629.6224542833647,
00:17:14.626       "min_latency_us": 1760.0279475982534,
00:17:14.626       "max_latency_us": 10588.786026200873
00:17:14.626     }
00:17:14.626   ],
00:17:14.626   "core_count": 1
00:17:14.626 }
00:17:14.626 11:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:14.626 11:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:14.626 11:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:14.626 | .driver_specific
00:17:14.626 | .nvme_error
00:17:14.626 | .status_code
00:17:14.626 | .command_transient_transport_error'
00:17:14.626 11:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:14.886 11:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 393 > 0 ))
00:17:14.886 11:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80404
00:17:14.886 11:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80404 ']'
00:17:14.886 11:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80404
00:17:14.886 11:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:17:14.886 11:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:14.886 11:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80404
00:17:14.886 11:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:14.886 11:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
killing process with pid 80404
00:17:14.886 11:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80404'
00:17:14.886 11:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80404
00:17:14.886 Received shutdown signal, test time was about 2.000000 seconds
00:17:14.886
00:17:14.886 Latency(us)
00:17:14.886 [2024-12-09T11:00:08.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:14.886 [2024-12-09T11:00:08.065Z] ===================================================================================================================
00:17:14.886 [2024-12-09T11:00:08.065Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:14.886 11:00:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80404
00:17:15.147 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80202
00:17:15.147 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80202 ']'
00:17:15.147 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80202
00:17:15.147 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # uname 00:17:15.147 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.147 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80202 00:17:15.147 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:15.147 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:15.147 killing process with pid 80202 00:17:15.147 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80202' 00:17:15.147 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80202 00:17:15.147 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80202 00:17:15.407 00:17:15.407 real 0m17.277s 00:17:15.407 user 0m32.329s 00:17:15.407 sys 0m4.765s 00:17:15.407 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.407 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:15.407 ************************************ 00:17:15.407 END TEST nvmf_digest_error 00:17:15.407 ************************************ 00:17:15.407 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:15.407 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:15.407 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:15.407 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:15.667 rmmod nvme_tcp 00:17:15.667 rmmod nvme_fabrics 00:17:15.667 rmmod nvme_keyring 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80202 ']' 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80202 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80202 ']' 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80202 00:17:15.667 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80202) - No such process 00:17:15.667 Process with pid 80202 is not found 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80202 is not found' 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:15.667 11:00:08 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:15.667 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:15.927 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:15.927 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:15.927 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:15.927 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:15.927 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:15.927 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.927 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:15.927 11:00:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.927 11:00:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:17:15.927 00:17:15.927 real 0m35.864s 00:17:15.927 user 1m5.526s 00:17:15.927 sys 0m9.794s 00:17:15.927 11:00:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.927 11:00:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:15.927 ************************************ 00:17:15.927 END TEST nvmf_digest 00:17:15.927 ************************************ 00:17:15.927 11:00:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:17:15.927 11:00:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:17:15.927 11:00:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:15.927 11:00:09 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:15.927 11:00:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.927 11:00:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.927 ************************************ 00:17:15.927 START TEST nvmf_host_multipath 00:17:15.927 ************************************ 00:17:15.927 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:16.188 * Looking for test storage... 00:17:16.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.188 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:16.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.188 --rc genhtml_branch_coverage=1 00:17:16.188 --rc genhtml_function_coverage=1 00:17:16.188 --rc genhtml_legend=1 00:17:16.189 --rc geninfo_all_blocks=1 00:17:16.189 --rc geninfo_unexecuted_blocks=1 00:17:16.189 00:17:16.189 ' 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:16.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.189 --rc genhtml_branch_coverage=1 00:17:16.189 --rc genhtml_function_coverage=1 00:17:16.189 --rc genhtml_legend=1 00:17:16.189 --rc geninfo_all_blocks=1 00:17:16.189 --rc geninfo_unexecuted_blocks=1 00:17:16.189 00:17:16.189 ' 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:16.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.189 --rc genhtml_branch_coverage=1 00:17:16.189 --rc genhtml_function_coverage=1 00:17:16.189 --rc genhtml_legend=1 00:17:16.189 --rc geninfo_all_blocks=1 00:17:16.189 --rc geninfo_unexecuted_blocks=1 00:17:16.189 00:17:16.189 ' 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:16.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.189 --rc genhtml_branch_coverage=1 00:17:16.189 --rc genhtml_function_coverage=1 00:17:16.189 --rc genhtml_legend=1 00:17:16.189 --rc geninfo_all_blocks=1 00:17:16.189 --rc geninfo_unexecuted_blocks=1 00:17:16.189 00:17:16.189 ' 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:16.189 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:16.189 Cannot find device "nvmf_init_br" 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:16.189 Cannot find device "nvmf_init_br2" 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:16.189 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:16.189 Cannot find device "nvmf_tgt_br" 00:17:16.190 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:17:16.190 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:16.448 Cannot find device "nvmf_tgt_br2" 00:17:16.448 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:17:16.448 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:16.448 Cannot find device "nvmf_init_br" 00:17:16.448 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:17:16.448 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:16.449 Cannot find device "nvmf_init_br2" 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:16.449 Cannot find device "nvmf_tgt_br" 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:16.449 Cannot find device "nvmf_tgt_br2" 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:16.449 Cannot find device "nvmf_br" 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:16.449 Cannot find device "nvmf_init_if" 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:16.449 Cannot find device "nvmf_init_if2" 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:17:16.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:16.449 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
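Condensed from the nvmf_veth_init trace above, the virtual topology this test runs on can be reproduced by hand with roughly the following commands (a minimal sketch reusing the namespace, interface and 10.0.0.x address names from nvmf/common.sh; the link-up steps and the remaining bridge memberships follow exactly as traced):

  ip netns add nvmf_tgt_ns_spdk                                   # target-side network namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator veth pairs
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target veth pairs
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target listen addresses
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up       # bridge joining both sides
  ip link set nvmf_init_br master nvmf_br                         # nvmf_init_br2, nvmf_tgt_br, nvmf_tgt_br2 likewise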
00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:16.449 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:16.709 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:16.709 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:16.709 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:16.709 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:16.709 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:16.709 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:16.709 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:16.709 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:16.709 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:16.709 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:17:16.709 00:17:16.709 --- 10.0.0.3 ping statistics --- 00:17:16.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.709 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:16.709 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:16.709 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:16.709 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:17:16.709 00:17:16.709 --- 10.0.0.4 ping statistics --- 00:17:16.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.709 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:16.709 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:16.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:16.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:16.709 00:17:16.709 --- 10.0.0.1 ping statistics --- 00:17:16.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.709 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:16.709 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:16.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:16.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:17:16.709 00:17:16.709 --- 10.0.0.2 ping statistics --- 00:17:16.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.709 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:17:16.709 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80726 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80726 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80726 ']' 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.710 11:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:16.710 [2024-12-09 11:00:09.789576] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
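Once the four ping checks above pass, the target application is started inside the namespace and the script blocks until its RPC socket answers. Reduced to its essentials, that step looks roughly like this (a sketch reusing the binary path and core mask from the trace; the readiness wait is approximated here with a plain rpc_get_methods poll rather than the full waitforlisten helper):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # poll the default RPC socket until the target is ready to accept RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done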
00:17:16.710 [2024-12-09 11:00:09.789754] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.969 [2024-12-09 11:00:09.922021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:16.969 [2024-12-09 11:00:09.973987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.969 [2024-12-09 11:00:09.974027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.969 [2024-12-09 11:00:09.974034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.969 [2024-12-09 11:00:09.974039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.969 [2024-12-09 11:00:09.974043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.969 [2024-12-09 11:00:09.974889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.969 [2024-12-09 11:00:09.974895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.969 [2024-12-09 11:00:10.016924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:17.543 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.543 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:17:17.543 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:17.543 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:17.543 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:17.543 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.543 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80726 00:17:17.543 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:17.802 [2024-12-09 11:00:10.886215] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.802 11:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:18.062 Malloc0 00:17:18.062 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:18.321 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:18.581 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:18.581 [2024-12-09 11:00:11.727260] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:18.581 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:18.840 [2024-12-09 11:00:11.930954] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:18.840 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:18.840 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80776 00:17:18.840 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.840 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80776 /var/tmp/bdevperf.sock 00:17:18.840 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 80776 ']' 00:17:18.840 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.840 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.840 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.840 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.841 11:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:19.863 11:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.863 11:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:17:19.863 11:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:19.864 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:20.122 Nvme0n1 00:17:20.122 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:20.380 Nvme0n1 00:17:20.637 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:20.637 11:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:21.574 11:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:21.574 11:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:21.833 11:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:21.833 11:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:21.833 11:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80726 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:21.833 11:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80821 00:17:21.833 11:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:28.397 11:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:28.397 11:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:28.397 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:28.397 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.397 Attaching 4 probes... 00:17:28.397 @path[10.0.0.3, 4421]: 17586 00:17:28.397 @path[10.0.0.3, 4421]: 17423 00:17:28.397 @path[10.0.0.3, 4421]: 17562 00:17:28.397 @path[10.0.0.3, 4421]: 17570 00:17:28.397 @path[10.0.0.3, 4421]: 17655 00:17:28.397 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:28.397 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:28.397 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:28.397 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:28.397 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:28.397 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:28.397 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80821 00:17:28.397 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:28.397 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:28.397 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:28.397 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:28.656 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:28.656 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80726 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:28.656 11:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80935 00:17:28.656 11:00:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:35.219 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:35.219 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:35.219 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:17:35.219 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:35.219 Attaching 4 probes... 00:17:35.219 @path[10.0.0.3, 4420]: 20180 00:17:35.219 @path[10.0.0.3, 4420]: 20328 00:17:35.219 @path[10.0.0.3, 4420]: 20184 00:17:35.219 @path[10.0.0.3, 4420]: 20348 00:17:35.219 @path[10.0.0.3, 4420]: 20282 00:17:35.219 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:35.219 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:35.219 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:35.219 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:17:35.219 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:35.219 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:35.219 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80935 00:17:35.219 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:35.219 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:35.219 11:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:35.219 11:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:35.219 11:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:35.219 11:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81047 00:17:35.219 11:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80726 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:35.219 11:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:41.785 Attaching 4 probes... 00:17:41.785 @path[10.0.0.3, 4421]: 12352 00:17:41.785 @path[10.0.0.3, 4421]: 17439 00:17:41.785 @path[10.0.0.3, 4421]: 17690 00:17:41.785 @path[10.0.0.3, 4421]: 18520 00:17:41.785 @path[10.0.0.3, 4421]: 17464 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81047 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81165 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80726 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:41.785 11:00:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:48.358 11:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:48.358 11:00:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:48.358 Attaching 4 probes... 
00:17:48.358 00:17:48.358 00:17:48.358 00:17:48.358 00:17:48.358 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81165 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81277 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80726 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:48.358 11:00:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:54.922 11:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:54.922 11:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:54.922 11:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:54.922 11:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:54.922 Attaching 4 probes... 
00:17:54.922 @path[10.0.0.3, 4421]: 17320 00:17:54.922 @path[10.0.0.3, 4421]: 17468 00:17:54.922 @path[10.0.0.3, 4421]: 17426 00:17:54.922 @path[10.0.0.3, 4421]: 17320 00:17:54.922 @path[10.0.0.3, 4421]: 17589 00:17:54.922 11:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:54.922 11:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:54.922 11:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:54.922 11:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:54.922 11:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:54.922 11:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:54.922 11:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81277 00:17:54.922 11:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:54.922 11:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:54.922 11:00:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:17:55.857 11:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:55.857 11:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81395 00:17:55.857 11:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80726 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:55.857 11:00:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:02.422 11:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:02.422 11:00:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:02.422 11:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:02.422 11:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:02.422 Attaching 4 probes... 
00:18:02.422 @path[10.0.0.3, 4420]: 18913 00:18:02.422 @path[10.0.0.3, 4420]: 18576 00:18:02.422 @path[10.0.0.3, 4420]: 17812 00:18:02.422 @path[10.0.0.3, 4420]: 18536 00:18:02.422 @path[10.0.0.3, 4420]: 18040 00:18:02.422 11:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:02.422 11:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:02.422 11:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:02.422 11:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:02.422 11:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:02.422 11:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:02.422 11:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81395 00:18:02.422 11:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:02.422 11:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:02.422 [2024-12-09 11:00:55.361070] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:02.422 11:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:02.422 11:00:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:08.983 11:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:08.983 11:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81575 00:18:08.983 11:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80726 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:08.983 11:01:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:15.577 Attaching 4 probes... 
00:18:15.577 @path[10.0.0.3, 4421]: 18232 00:18:15.577 @path[10.0.0.3, 4421]: 18546 00:18:15.577 @path[10.0.0.3, 4421]: 18175 00:18:15.577 @path[10.0.0.3, 4421]: 18200 00:18:15.577 @path[10.0.0.3, 4421]: 18288 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81575 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80776 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80776 ']' 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80776 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80776 00:18:15.577 killing process with pid 80776 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80776' 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80776 00:18:15.577 11:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80776 00:18:15.577 { 00:18:15.577 "results": [ 00:18:15.577 { 00:18:15.577 "job": "Nvme0n1", 00:18:15.577 "core_mask": "0x4", 00:18:15.577 "workload": "verify", 00:18:15.577 "status": "terminated", 00:18:15.577 "verify_range": { 00:18:15.577 "start": 0, 00:18:15.577 "length": 16384 00:18:15.577 }, 00:18:15.577 "queue_depth": 128, 00:18:15.577 "io_size": 4096, 00:18:15.577 "runtime": 54.293324, 00:18:15.577 "iops": 7806.079436212084, 00:18:15.577 "mibps": 30.492497797703454, 00:18:15.577 "io_failed": 0, 00:18:15.577 "io_timeout": 0, 00:18:15.577 "avg_latency_us": 16379.367889846264, 00:18:15.577 "min_latency_us": 568.789519650655, 00:18:15.577 "max_latency_us": 7033243.388646288 00:18:15.577 } 00:18:15.577 ], 00:18:15.577 "core_count": 1 00:18:15.577 } 00:18:15.577 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80776 00:18:15.577 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:15.577 [2024-12-09 11:00:11.977343] Starting SPDK v25.01-pre git sha1 25cdf096c / 
DPDK 24.03.0 initialization... 00:18:15.577 [2024-12-09 11:00:11.977418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80776 ] 00:18:15.577 [2024-12-09 11:00:12.130346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.577 [2024-12-09 11:00:12.179584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.577 [2024-12-09 11:00:12.219655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:15.577 Running I/O for 90 seconds... 00:18:15.577 8847.00 IOPS, 34.56 MiB/s [2024-12-09T11:01:08.756Z] 8975.00 IOPS, 35.06 MiB/s [2024-12-09T11:01:08.756Z] 8972.67 IOPS, 35.05 MiB/s [2024-12-09T11:01:08.756Z] 8901.50 IOPS, 34.77 MiB/s [2024-12-09T11:01:08.756Z] 8870.00 IOPS, 34.65 MiB/s [2024-12-09T11:01:08.756Z] 8857.00 IOPS, 34.60 MiB/s [2024-12-09T11:01:08.756Z] 8849.43 IOPS, 34.57 MiB/s [2024-12-09T11:01:08.756Z] [2024-12-09 11:00:21.578373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.577 [2024-12-09 11:00:21.578443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:15.577 [2024-12-09 11:00:21.578505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.577 [2024-12-09 11:00:21.578516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:15.577 [2024-12-09 11:00:21.578532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.577 [2024-12-09 11:00:21.578541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:15.577 [2024-12-09 11:00:21.578555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.577 [2024-12-09 11:00:21.578565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:15.577 [2024-12-09 11:00:21.578578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.577 [2024-12-09 11:00:21.578587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:15.577 [2024-12-09 11:00:21.578602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.577 [2024-12-09 11:00:21.578611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:15.577 [2024-12-09 11:00:21.578625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.577 [2024-12-09 11:00:21.578634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:15.577 [2024-12-09 11:00:21.578647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.577 [2024-12-09 11:00:21.578656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:15.577 [2024-12-09 11:00:21.578670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.577 [2024-12-09 11:00:21.578679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:15.577 [2024-12-09 11:00:21.578692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:34936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.577 [2024-12-09 11:00:21.578723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:15.577 [2024-12-09 11:00:21.578737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.577 [2024-12-09 11:00:21.578746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.578770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.578 [2024-12-09 11:00:21.578779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.578793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.578 [2024-12-09 11:00:21.578802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.578816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.578 [2024-12-09 11:00:21.578824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.578838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:34976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.578 [2024-12-09 11:00:21.578847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.578861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.578 [2024-12-09 11:00:21.578869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579077] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
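
The two rpc.py calls traced at host/multipath.sh@58 and @59 earlier in this output are the whole of each set_ANA_state step: the 10.0.0.3:4420 listener gets the first state and the 10.0.0.3:4421 listener gets the second, and the host's multipath policy is then expected to steer I/O accordingly. A minimal sketch of that helper, assuming the same NQN, address and ports as in this run (the function name and argument order are inferred from the trace, not copied from the script source):

    set_ANA_state() {
        # $1 -> ANA state for the 10.0.0.3:4420 listener, $2 -> ANA state for 10.0.0.3:4421
        ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    # e.g. set_ANA_state inaccessible optimized   # matches the multipath.sh@89 call above
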
00:18:15.578 [2024-12-09 11:00:21.579329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.579989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:35472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.579998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.580011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.580020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.580045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.580074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.580088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.580097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.580111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:34992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.578 [2024-12-09 11:00:21.580120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.580134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.578 [2024-12-09 11:00:21.580143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.580156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.578 [2024-12-09 11:00:21.580165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.580179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.578 [2024-12-09 11:00:21.580188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.580201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.578 [2024-12-09 11:00:21.580210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.580224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.578 [2024-12-09 11:00:21.580232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.580246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.578 [2024-12-09 11:00:21.580260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.580274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.578 [2024-12-09 11:00:21.580283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.581155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:35504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.581174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:15.578 [2024-12-09 11:00:21.581191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.578 [2024-12-09 11:00:21.581200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.581224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.581247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:35536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.581269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.581293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.581315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.581338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
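
Every confirm_io_on_port block in the trace above follows the same shape: attach the nvmf_path.bt bpftrace probes to the bdevperf process, sleep six seconds, ask the target which listener currently reports the expected ANA state, and check that the @path counters in trace.txt point at that same port. A sketch of the pattern, assuming the same rpc.py, NQN and host address as this run; the bpftrace.sh redirection, the pipeline order and the variable names here are assumptions rather than the literal script source:

    confirm_io_on_port() {
        local expected_state=$1 expected_port=$2
        ./scripts/bpftrace.sh "$bdevperf_pid" ./scripts/bpf/nvmf_path.bt > trace.txt &
        dtrace_pid=$!
        sleep 6
        # Port of the listener the target reports in the expected ANA state
        active_port=$(./scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
            jq -r ".[] | select(.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
        # Port the host actually sent I/O to, according to the first @path probe line
        port=$(awk '$1=="@path[10.0.0.3," {print $2}' trace.txt | sed -n 1p | cut -d ']' -f1)
        [[ $active_port == "$expected_port" ]] && [[ $port == "$expected_port" ]]
        local rc=$?
        kill "$dtrace_pid"
        rm -f trace.txt
        return "$rc"
    }
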
00:18:15.579 [2024-12-09 11:00:21.581351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.581360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:35576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.581382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:35584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.581412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.581435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.581457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.581480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.581502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.581525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.579 [2024-12-09 11:00:21.581547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.579 [2024-12-09 11:00:21.581570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:35072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.579 [2024-12-09 11:00:21.581603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.579 [2024-12-09 11:00:21.581624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.579 [2024-12-09 11:00:21.581646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:35096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.579 [2024-12-09 11:00:21.581669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:35104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.579 [2024-12-09 11:00:21.581692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.581710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:35112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.579 [2024-12-09 11:00:21.581719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.582265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:35632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.582282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.582297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.582307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.582320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.582330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.582343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.582351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.582365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.582373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.582387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.582395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.582409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.582417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.582431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.582439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.582948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.582965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.582981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.582989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.583004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.583013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.583034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.583043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.583056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.583065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.583079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
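
When the bdevperf process (pid 80776) is killed at the end of the test it prints the per-job JSON summary shown earlier in this output: the verify job on core mask 0x4 sustained roughly 7806 IOPS (about 30.5 MiB/s) over the 54-second run, with an average latency around 16.4 ms. If that blob is saved to a file, the headline numbers can be pulled out with a small jq filter (the file name is hypothetical; the field names are the ones visible in the summary):

    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' bdevperf_summary.json
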
00:18:15.579 [2024-12-09 11:00:21.583095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.583109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.583117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.583131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.583140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.584973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.579 [2024-12-09 11:00:21.584993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:15.579 [2024-12-09 11:00:21.585010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.585019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.585043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.585065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.580 [2024-12-09 11:00:21.585088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:35128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.580 [2024-12-09 11:00:21.585111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:35136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.580 [2024-12-09 11:00:21.585134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 
nsid:1 lba:35144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.580 [2024-12-09 11:00:21.585163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:35152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.580 [2024-12-09 11:00:21.585191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:35160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.580 [2024-12-09 11:00:21.585213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.580 [2024-12-09 11:00:21.585237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:35176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.580 [2024-12-09 11:00:21.585259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.580 [2024-12-09 11:00:21.585281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.580 [2024-12-09 11:00:21.585305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:35200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.580 [2024-12-09 11:00:21.585328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:35208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.580 [2024-12-09 11:00:21.585350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.580 [2024-12-09 11:00:21.585373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585387] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.580 [2024-12-09 11:00:21.585395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.580 [2024-12-09 11:00:21.585419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:35240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.580 [2024-12-09 11:00:21.585446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.585468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.585490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.585514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.585528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.585536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.586121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.586139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.586155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.586164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.586178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.586187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.586201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.586210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.586223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.586232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.586245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.586254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.586268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.586277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.586291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.586306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.586337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.586345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:21.586359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:21.586369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:15.580 8846.12 IOPS, 34.56 MiB/s [2024-12-09T11:01:08.759Z] 8979.67 IOPS, 35.08 MiB/s [2024-12-09T11:01:08.759Z] 9100.90 IOPS, 35.55 MiB/s [2024-12-09T11:01:08.759Z] 9185.55 IOPS, 35.88 MiB/s [2024-12-09T11:01:08.759Z] 9268.08 IOPS, 36.20 MiB/s [2024-12-09T11:01:08.759Z] 9337.92 IOPS, 36.48 MiB/s [2024-12-09T11:01:08.759Z] 9393.21 IOPS, 36.69 MiB/s [2024-12-09T11:01:08.759Z] [2024-12-09 11:00:28.007976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:28.008027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:28.008073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:28.008082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:28.008113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:28.008121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:28.008135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.580 [2024-12-09 11:00:28.008144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:15.580 [2024-12-09 11:00:28.008158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:32056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.581 [2024-12-09 11:00:28.008453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.581 [2024-12-09 11:00:28.008477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.581 [2024-12-09 11:00:28.008499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.581 [2024-12-09 11:00:28.008522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.581 [2024-12-09 11:00:28.008544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
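The throughput samples interleaved above (8846.12 IOPS, 34.56 MiB/s up through 9393.21 IOPS, 36.69 MiB/s) line up with the 4 KiB I/O size visible in every command print (len:8 sectors, SGL len:0x1000). A minimal sanity check of that relationship, assuming 512-byte sectors; this one-liner is illustrative only and is not run by the test itself:
# IOPS x per-I/O bytes -> MiB/s; the IOPS value is the first progress sample above
awk 'BEGIN { iops = 8846.12; io_bytes = 8 * 512; printf "%.2f MiB/s\n", iops * io_bytes / (1024 * 1024) }'
# prints 34.56 MiB/s, matching the logged sample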
00:18:15.581 [2024-12-09 11:00:28.008572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.581 [2024-12-09 11:00:28.008594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.581 [2024-12-09 11:00:28.008617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.581 [2024-12-09 11:00:28.008639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.581 [2024-12-09 11:00:28.008662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.581 [2024-12-09 11:00:28.008684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.581 [2024-12-09 11:00:28.008706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.581 [2024-12-09 11:00:28.008728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.581 [2024-12-09 11:00:28.008751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.581 [2024-12-09 11:00:28.008773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.581 [2024-12-09 11:00:28.008805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:15.581 [2024-12-09 11:00:28.008922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.581 [2024-12-09 11:00:28.008930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.008944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.008953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.008967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.008975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.008989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.008997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.582 [2024-12-09 11:00:28.009203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.582 [2024-12-09 11:00:28.009228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.582 [2024-12-09 11:00:28.009250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
00:18:15.582 [2024-12-09 11:00:28.009263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.582 [2024-12-09 11:00:28.009272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.582 [2024-12-09 11:00:28.009295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.582 [2024-12-09 11:00:28.009317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.582 [2024-12-09 11:00:28.009340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.582 [2024-12-09 11:00:28.009363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:32304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:32312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:32320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:32344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:32376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:32384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.582 [2024-12-09 11:00:28.009856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:15.582 [2024-12-09 11:00:28.009870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.009879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.009892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.009901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.009915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.009924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.009937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.009946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.009959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:15.583 [2024-12-09 11:00:28.009968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.009983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.009991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.010005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.010017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.010031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.010040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.010054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.583 [2024-12-09 11:00:28.010062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.010075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.583 [2024-12-09 11:00:28.010084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.010097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.583 [2024-12-09 11:00:28.010106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.010121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.583 [2024-12-09 11:00:28.010140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.010802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:32456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.583 [2024-12-09 11:00:28.010821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.010839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:32464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.583 [2024-12-09 11:00:28.010848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.010862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:32472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.583 [2024-12-09 11:00:28.010870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.010883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:32480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.583 [2024-12-09 11:00:28.010893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.010906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:32488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.583 [2024-12-09 11:00:28.010915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.010928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:32496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.583 [2024-12-09 11:00:28.010936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.010950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.583 [2024-12-09 11:00:28.010958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.010979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.583 [2024-12-09 11:00:28.010988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.011010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.011032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.011054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.011076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.011098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.011120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.011143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.011165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.011187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.011209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.011230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.011256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.011279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.011301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 
dnr:0 00:18:15.583 [2024-12-09 11:00:28.011315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.011324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.583 [2024-12-09 11:00:28.011359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.583 [2024-12-09 11:00:28.011382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.583 [2024-12-09 11:00:28.011405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.583 [2024-12-09 11:00:28.011427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.583 [2024-12-09 11:00:28.011449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:15.583 [2024-12-09 11:00:28.011463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:32560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:32568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:32576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:32592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:32632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:32640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.011987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.011996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.012010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
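Every completion in this stretch carries the same (03/02) suffix, which appears to be the (status code type / status code) pair: type 0x3 is the NVMe path-related status group and code 0x2 in that group is the ANA-inaccessible condition that the log text spells out as ASYMMETRIC ACCESS INACCESSIBLE. When triaging a saved copy of this console output, a rough count of such completions can be taken with a one-liner like the following (the file name is illustrative):
# count completions that failed with the ANA-inaccessible status in a saved log
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' nvmf-tcp-uring-vg-autotest.log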
00:18:15.584 [2024-12-09 11:00:28.012018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.012040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.012049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.012079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.012088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.012102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.012111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.013137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.584 [2024-12-09 11:00:28.013159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.013186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.584 [2024-12-09 11:00:28.013196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.013211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.584 [2024-12-09 11:00:28.013219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.013233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.584 [2024-12-09 11:00:28.013242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.013256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.584 [2024-12-09 11:00:28.013276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.013289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.584 [2024-12-09 11:00:28.013298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.013311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 
nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.584 [2024-12-09 11:00:28.013320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.013334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.584 [2024-12-09 11:00:28.013343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.013356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.584 [2024-12-09 11:00:28.013365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.013378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.584 [2024-12-09 11:00:28.013386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.013400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.584 [2024-12-09 11:00:28.013408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.013422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.584 [2024-12-09 11:00:28.013430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:15.584 [2024-12-09 11:00:28.013443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.584 [2024-12-09 11:00:28.013452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.013469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.585 [2024-12-09 11:00:28.013478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.013491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.585 [2024-12-09 11:00:28.013499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.013513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.585 [2024-12-09 11:00:28.013521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.013535] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.585 [2024-12-09 11:00:28.013544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.013557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.013566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.013579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.013588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.013601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.013610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.013623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.013631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.013645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.013654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.013667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.013675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.013688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.013697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.013958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.013972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.013988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 
dnr:0 00:18:15.585 [2024-12-09 11:00:28.014016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.585 [2024-12-09 11:00:28.014183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.585 [2024-12-09 11:00:28.014207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.585 [2024-12-09 11:00:28.014229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.585 [2024-12-09 11:00:28.014251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.585 [2024-12-09 11:00:28.014278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.585 [2024-12-09 11:00:28.014301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.585 [2024-12-09 11:00:28.014324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.585 [2024-12-09 11:00:28.014346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:32328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:15.585 [2024-12-09 11:00:28.014615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.585 [2024-12-09 11:00:28.014624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.014637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.014646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.014659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.014668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.014682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:15.586 [2024-12-09 11:00:28.014691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.014981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.014996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.015021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.015043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.015066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.015089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.586 [2024-12-09 11:00:28.015111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.586 [2024-12-09 11:00:28.015150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.586 [2024-12-09 11:00:28.015172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.586 [2024-12-09 11:00:28.015194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.586 [2024-12-09 11:00:28.015223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.586 [2024-12-09 11:00:28.015245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.586 [2024-12-09 11:00:28.015267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.586 [2024-12-09 11:00:28.015289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:32424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.015311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.015333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.015355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.015706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:32456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.015730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.015769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.015791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.015813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:32488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.015835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:32496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.015857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:32504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.015880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.586 [2024-12-09 11:00:28.015901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.586 [2024-12-09 11:00:28.015924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.586 [2024-12-09 11:00:28.015946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.586 [2024-12-09 11:00:28.015968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.015981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.586 [2024-12-09 11:00:28.015990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:18:15.586 [2024-12-09 11:00:28.016003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.586 [2024-12-09 11:00:28.016012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.016025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.586 [2024-12-09 11:00:28.016048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:15.586 [2024-12-09 11:00:28.016078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.587 [2024-12-09 11:00:28.016087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.016101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.587 [2024-12-09 11:00:28.016109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.016123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.587 [2024-12-09 11:00:28.016132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.016146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.587 [2024-12-09 11:00:28.016154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.016168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.587 [2024-12-09 11:00:28.016177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.016191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.587 [2024-12-09 11:00:28.016200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.016213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.587 [2024-12-09 11:00:28.016222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.016236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.587 [2024-12-09 11:00:28.016244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.016258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.587 [2024-12-09 11:00:28.016267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.016280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.587 [2024-12-09 11:00:28.016289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.016303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:32520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.016311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.016326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.016334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.016352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.016361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.016375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.016383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.016397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.016406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.016420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.016429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.016443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.016452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017210] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:32592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:32600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:32640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:15.587 [2024-12-09 11:00:28.017459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:15.587 [2024-12-09 11:00:28.017717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.587 [2024-12-09 11:00:28.017726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.017739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.017747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.017784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.017794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.017808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.017817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.017830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.017839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.017853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.017862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.017876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.017884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.017898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.017907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.017921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.017929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.017949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.017962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.017976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.017985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.017998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.018007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.018029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.018052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.018075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.018098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.018121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.018143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0024 p:0 
m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.018166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.018188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.018210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.018233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.018259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.018282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.018304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.018327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.018353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.018375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.018397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.018420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.018442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.018464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.018487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.018509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.588 [2024-12-09 11:00:28.018536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.018559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.018581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.018604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.018626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.018649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:15.588 [2024-12-09 11:00:28.018662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.588 [2024-12-09 11:00:28.018671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.018685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.589 [2024-12-09 11:00:28.018694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.018707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.589 [2024-12-09 11:00:28.018716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.018729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.018738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.018752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.018768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.018782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.018792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.018805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.018818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.018832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:15.589 [2024-12-09 11:00:28.018841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.018854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:32304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.018863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.018877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.018886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.018903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.018912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.018925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.018934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.018948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.018958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.018971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:32344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.018980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.018994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.019003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.019025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.019047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32376 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.019069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.019119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.019141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.019163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:32408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.019185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:32416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.019207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.589 [2024-12-09 11:00:28.019229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.589 [2024-12-09 11:00:28.019251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.589 [2024-12-09 11:00:28.019273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.589 [2024-12-09 11:00:28.019295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.589 [2024-12-09 11:00:28.019317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.589 [2024-12-09 11:00:28.019339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.589 [2024-12-09 11:00:28.019361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.589 [2024-12-09 11:00:28.019383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.019409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.019430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.019444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.019453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.020141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.020160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.020176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.020185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.020199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.020208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 
00:18:15.589 [2024-12-09 11:00:28.020222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.020231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.020245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.020254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.020267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:32488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.589 [2024-12-09 11:00:28.020276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.589 [2024-12-09 11:00:28.020290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:32496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.020298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:32504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.020320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.020346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.590 [2024-12-09 11:00:28.020376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.590 [2024-12-09 11:00:28.020400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.590 [2024-12-09 11:00:28.020423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.590 [2024-12-09 11:00:28.020446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.590 [2024-12-09 11:00:28.020468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.590 [2024-12-09 11:00:28.020491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.590 [2024-12-09 11:00:28.020513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.590 [2024-12-09 11:00:28.020535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.590 [2024-12-09 11:00:28.020558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.590 [2024-12-09 11:00:28.020581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.590 [2024-12-09 11:00:28.020603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.590 [2024-12-09 11:00:28.020625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.590 [2024-12-09 11:00:28.020652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.590 [2024-12-09 11:00:28.020674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.590 [2024-12-09 11:00:28.020697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.590 [2024-12-09 11:00:28.020720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:32520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.020742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.020774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.020797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.020819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:32552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.020841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.020855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.020864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.021169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.021184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.021200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:15.590 [2024-12-09 11:00:28.021209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.021223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.021240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.021254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.021262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.021276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.021285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.021298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.021307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.021321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.021329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.021343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.021352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.021365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.021374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.021388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:32640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.021397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:15.590 [2024-12-09 11:00:28.021410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.590 [2024-12-09 11:00:28.021419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 
lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.021442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.021464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.021486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.021512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.021535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.021557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.021589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.021611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.021633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.021655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.021676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.021698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.021720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.021741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.021790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.591 [2024-12-09 11:00:28.021813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.591 [2024-12-09 11:00:28.021841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.591 [2024-12-09 11:00:28.021863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.591 [2024-12-09 11:00:28.021885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.591 [2024-12-09 11:00:28.021907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
00:18:15.591 [2024-12-09 11:00:28.021921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.591 [2024-12-09 11:00:28.021932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.591 [2024-12-09 11:00:28.021955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.591 [2024-12-09 11:00:28.021977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.021991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.591 [2024-12-09 11:00:28.021999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.022013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.591 [2024-12-09 11:00:28.022022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.022035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.591 [2024-12-09 11:00:28.022044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.022058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.591 [2024-12-09 11:00:28.022066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.022084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.591 [2024-12-09 11:00:28.022092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.022111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.591 [2024-12-09 11:00:28.022119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.022133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.591 [2024-12-09 11:00:28.022142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.022156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.591 [2024-12-09 11:00:28.022164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.022178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.022187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.022200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.022210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.022223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.022232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.022245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.022254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.022267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.022276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.022289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.022299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.022313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.022321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.022335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.022344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:15.591 [2024-12-09 11:00:28.022357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.591 [2024-12-09 11:00:28.022365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.592 [2024-12-09 11:00:28.022562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.592 [2024-12-09 11:00:28.022585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:15.592 [2024-12-09 11:00:28.022607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.592 [2024-12-09 11:00:28.022629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.592 [2024-12-09 11:00:28.022652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.592 [2024-12-09 11:00:28.022678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.592 [2024-12-09 11:00:28.022701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.592 [2024-12-09 11:00:28.022724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 
nsid:1 lba:32296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:32312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:32320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:32328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.022955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.022963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.024138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:32344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.024157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.024174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:32352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.024183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.024197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:32360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.024206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.024220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.024229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.024243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.024252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.024267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.024275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.024290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.024298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.024312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.024321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.024334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.024343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.024357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.592 [2024-12-09 11:00:28.024366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.024380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.592 [2024-12-09 11:00:28.024389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.024402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.592 [2024-12-09 11:00:28.024411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.024432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.592 [2024-12-09 11:00:28.024441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:15.592 [2024-12-09 11:00:28.024455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.592 [2024-12-09 11:00:28.024464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:18:15.592 [2024-12-09 11:00:28.024478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.024487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.024500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.024509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.024523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.024532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.024545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:31872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.024554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.024568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.024576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.024590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.024599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.024613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.024622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.024635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:32448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.024644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.024658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.024667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.024892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.024930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.024951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.024967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.024981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:32480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.024990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:32488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.025012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:32496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.025035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.025057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.025079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.025102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.025124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.025147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.025169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.025191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.025214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.025241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.025263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.025286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.025308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.025330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.025352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.025375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:15.593 [2024-12-09 11:00:28.025397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.025420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.593 [2024-12-09 11:00:28.025442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.025456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.025465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.026323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.026339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.026354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.026363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.026383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.026392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.026406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.026415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.026428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.026437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.026450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:32568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.026459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.026473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:32576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.593 [2024-12-09 11:00:28.026481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:15.593 [2024-12-09 11:00:28.026495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:32584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:32600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:32608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:32640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:32056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:18:15.594 [2024-12-09 11:00:28.026933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.026982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.026991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.027015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.027024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.027037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.594 [2024-12-09 11:00:28.027046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.027059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.594 [2024-12-09 11:00:28.027067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.027081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.594 [2024-12-09 11:00:28.027090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.027104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.594 [2024-12-09 11:00:28.027112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.027125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.594 [2024-12-09 11:00:28.027134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.027147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.594 [2024-12-09 11:00:28.027156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.027169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.594 [2024-12-09 11:00:28.027177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.027191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.594 [2024-12-09 11:00:28.027199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.027212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.594 [2024-12-09 11:00:28.027225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.027239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.594 [2024-12-09 11:00:28.027247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.027260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.594 [2024-12-09 11:00:28.027269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:15.594 [2024-12-09 11:00:28.027282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.595 [2024-12-09 11:00:28.027290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.595 [2024-12-09 11:00:28.027312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.595 [2024-12-09 11:00:28.027334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.595 [2024-12-09 11:00:28.027356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.595 [2024-12-09 11:00:28.027378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.595 [2024-12-09 11:00:28.027400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.027422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.027447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.027469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.027490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.027516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.027538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.027559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.027581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:15.595 [2024-12-09 11:00:28.027603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.027624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.027646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.027668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.027690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.027711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.027733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.027761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.595 [2024-12-09 11:00:28.027790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.595 [2024-12-09 11:00:28.027812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 
lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.595 [2024-12-09 11:00:28.027833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.595 [2024-12-09 11:00:28.027855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.595 [2024-12-09 11:00:28.027878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.595 [2024-12-09 11:00:28.027899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.595 [2024-12-09 11:00:28.027921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.595 [2024-12-09 11:00:28.027943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.027956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:32264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.027965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.028005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.028015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.028037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.028046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.028076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.028085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.028098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.028115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.028129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.028137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.028151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.028160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.028174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.028183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.028196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.028205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.028219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.595 [2024-12-09 11:00:28.028227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:15.595 [2024-12-09 11:00:28.028241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:32344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.028249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.028272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.028294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.028319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
00:18:15.596 [2024-12-09 11:00:28.028333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:32376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.028342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:32384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.028364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.028390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.028417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.028439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.028462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.028484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.028506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.028531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:31840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.028553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.028575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.028598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.028620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.028642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.028665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.028693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:32440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.028716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:32448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.028738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.028752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.028769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:32464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.029474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:32472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.029500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:32480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.029522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.029545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:32496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.029567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.029591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:32512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.596 [2024-12-09 11:00:28.029613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:31880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.029635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.029665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.029687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:15.596 [2024-12-09 11:00:28.029710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.029733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.029771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.029793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.029816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.029838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.596 [2024-12-09 11:00:28.029861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:15.596 [2024-12-09 11:00:28.029875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.597 [2024-12-09 11:00:28.029883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.029897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.597 [2024-12-09 11:00:28.029906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.029919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.597 [2024-12-09 11:00:28.029928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.029942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 
nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.597 [2024-12-09 11:00:28.029955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.029969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.597 [2024-12-09 11:00:28.029978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.029991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.597 [2024-12-09 11:00:28.030000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.030947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.030965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.030993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:32528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:32536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:32544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:32584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:32592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:32608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:32640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 
00:18:15.597 [2024-12-09 11:00:28.031381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:32016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:32056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.597 [2024-12-09 11:00:28.031713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.597 [2024-12-09 11:00:28.031736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.597 [2024-12-09 11:00:28.031767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:15.597 [2024-12-09 11:00:28.031788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.031797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.031811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.031819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.031834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.031842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.031856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.031865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.031880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.031889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.031904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.031912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.031926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.031934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.031949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.031957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.031971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:31704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.031980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.031994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.032003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.032026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.032074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:15.598 [2024-12-09 11:00:28.032103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.032126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:32144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:32176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:32200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:32208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:32232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:32240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:31752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.032535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.032558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.032581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.032605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.032628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.032651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.032675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.598 [2024-12-09 11:00:28.032702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.598 [2024-12-09 11:00:28.032749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:15.598 [2024-12-09 11:00:28.032771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.032781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.032796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.032804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:18:15.599 [2024-12-09 11:00:28.032819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.032828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.032842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:32304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.032851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.032866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.032875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.032889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:32320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.032898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.032913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:32328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.032921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.032936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.032945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.032960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:32344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.032968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.032988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.032996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:32360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.033020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.033043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:32376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.033067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:32384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.033109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.033132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.033156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.033180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.033203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.599 [2024-12-09 11:00:28.033226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.599 [2024-12-09 11:00:28.033250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.599 [2024-12-09 11:00:28.033273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:31840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.599 [2024-12-09 11:00:28.033302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.599 [2024-12-09 11:00:28.033326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.599 [2024-12-09 11:00:28.033350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.599 [2024-12-09 11:00:28.033374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.599 [2024-12-09 11:00:28.033397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.033431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.033453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.033476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.033499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:28.033514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:28.033522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:15.599 9001.67 IOPS, 35.16 MiB/s [2024-12-09T11:01:08.778Z] 8775.50 IOPS, 34.28 MiB/s [2024-12-09T11:01:08.778Z] 8770.24 IOPS, 34.26 MiB/s [2024-12-09T11:01:08.778Z] 
8770.06 IOPS, 34.26 MiB/s [2024-12-09T11:01:08.778Z] 8799.42 IOPS, 34.37 MiB/s [2024-12-09T11:01:08.778Z] 8794.65 IOPS, 34.35 MiB/s [2024-12-09T11:01:08.778Z] 8790.33 IOPS, 34.34 MiB/s [2024-12-09T11:01:08.778Z] [2024-12-09 11:00:34.861064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:34.861112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:34.861152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:34.861182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:15.599 [2024-12-09 11:00:34.861197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.599 [2024-12-09 11:00:34.861206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:50368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.600 [2024-12-09 11:00:34.861228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.600 [2024-12-09 11:00:34.861251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.600 [2024-12-09 11:00:34.861273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:50392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.600 [2024-12-09 11:00:34.861305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.600 [2024-12-09 11:00:34.861326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861588] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:50000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.861870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.600 [2024-12-09 11:00:34.861969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.861983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.600 [2024-12-09 11:00:34.861991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.862005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.600 [2024-12-09 11:00:34.862013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.862027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.600 [2024-12-09 11:00:34.862036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.862049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.600 [2024-12-09 11:00:34.862058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.862072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.600 [2024-12-09 11:00:34.862080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.862093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.600 [2024-12-09 11:00:34.862107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.862121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.600 [2024-12-09 11:00:34.862129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.862143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:50024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.862151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.862165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.862173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:15.600 [2024-12-09 11:00:34.862187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.600 [2024-12-09 11:00:34.862196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.862218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.862240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.862264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.862287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.862309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.862332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.862354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:50104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.862381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.862403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.862426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.862448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.862470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.862492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.601 [2024-12-09 11:00:34.862728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.601 [2024-12-09 11:00:34.862765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:15.601 [2024-12-09 11:00:34.862790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.601 [2024-12-09 11:00:34.862815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.601 [2024-12-09 11:00:34.862840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.601 [2024-12-09 11:00:34.862865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.601 [2024-12-09 11:00:34.862890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.601 [2024-12-09 11:00:34.862921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.862946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.862970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.862987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.862995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.863011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.863019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.863036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.863044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.863061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.863069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.863085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.863094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.863110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.863118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.863134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.863143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.863159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.863167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.863183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.863192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.863211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.863220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.863236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.863244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.863261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.863270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.863286] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.863295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.863311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:50272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.601 [2024-12-09 11:00:34.863320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.864007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.601 [2024-12-09 11:00:34.864020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.864051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.601 [2024-12-09 11:00:34.864077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:15.601 [2024-12-09 11:00:34.864094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.601 [2024-12-09 11:00:34.864103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 
dnr:0 00:18:15.602 [2024-12-09 11:00:34.864258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.602 [2024-12-09 11:00:34.864615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.602 [2024-12-09 11:00:34.864641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.602 [2024-12-09 11:00:34.864667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:50304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.602 [2024-12-09 11:00:34.864693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.602 [2024-12-09 11:00:34.864720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.602 [2024-12-09 11:00:34.864746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:50328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.602 [2024-12-09 11:00:34.864781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.602 [2024-12-09 11:00:34.864808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:50704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:50736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.864973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.864982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.865000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.865008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.865026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.865035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.865053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:15.602 [2024-12-09 11:00:34.865061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.865079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.865088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.865105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.865113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.865131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.865140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.865157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.602 [2024-12-09 11:00:34.865166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:15.602 [2024-12-09 11:00:34.865183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:34.865192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:34.865220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:34.865228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:34.865250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:34.865259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:15.603 8475.14 IOPS, 33.11 MiB/s [2024-12-09T11:01:08.782Z] 8106.65 IOPS, 31.67 MiB/s [2024-12-09T11:01:08.782Z] 7768.88 IOPS, 30.35 MiB/s [2024-12-09T11:01:08.782Z] 7458.12 IOPS, 29.13 MiB/s [2024-12-09T11:01:08.782Z] 7171.27 IOPS, 28.01 MiB/s [2024-12-09T11:01:08.782Z] 6905.67 IOPS, 26.98 MiB/s [2024-12-09T11:01:08.782Z] 6659.04 IOPS, 26.01 MiB/s [2024-12-09T11:01:08.782Z] 6661.72 IOPS, 26.02 MiB/s [2024-12-09T11:01:08.782Z] 6731.03 IOPS, 26.29 MiB/s [2024-12-09T11:01:08.782Z] 6796.10 IOPS, 26.55 MiB/s [2024-12-09T11:01:08.782Z] 6853.47 IOPS, 26.77 MiB/s [2024-12-09T11:01:08.782Z] 6910.85 IOPS, 27.00 MiB/s [2024-12-09T11:01:08.782Z] 6963.15 IOPS, 27.20 MiB/s [2024-12-09T11:01:08.782Z] [2024-12-09 11:00:47.915159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.603 [2024-12-09 11:00:47.915610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.603 [2024-12-09 11:00:47.915627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.603 [2024-12-09 11:00:47.915644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.603 [2024-12-09 11:00:47.915661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:15.603 [2024-12-09 11:00:47.915677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.603 [2024-12-09 11:00:47.915694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.603 [2024-12-09 11:00:47.915715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.603 [2024-12-09 11:00:47.915733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.603 [2024-12-09 11:00:47.915855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.603 [2024-12-09 11:00:47.915863] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.915871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.915879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.915889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.915897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.915906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.915914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.915923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.915931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.915940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.915952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.915961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.915969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.915977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.915985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.915994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.916002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.916019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916045] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:15.604 [2024-12-09 11:00:47.916436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.604 [2024-12-09 11:00:47.916480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.916498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.916516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.916535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.916553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.916571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.916589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.916606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.604 [2024-12-09 11:00:47.916615] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.604 [2024-12-09 11:00:47.916624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.916640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.916664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.916682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.916700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.916718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.916735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.916753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.916776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.916794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916804] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.916812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.916834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.916853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.916871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.916892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.916910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.916928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.916945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.916963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.916980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.916990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.916997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.917016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.917033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:15.605 [2024-12-09 11:00:47.917334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 
11:00:47.917356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.605 [2024-12-09 11:00:47.917368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.605 [2024-12-09 11:00:47.917376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 [2024-12-09 11:00:47.917385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.606 [2024-12-09 11:00:47.917393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 [2024-12-09 11:00:47.917403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.606 [2024-12-09 11:00:47.917412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 [2024-12-09 11:00:47.917422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.606 [2024-12-09 11:00:47.917430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 [2024-12-09 11:00:47.917450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.606 [2024-12-09 11:00:47.917458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 [2024-12-09 11:00:47.917467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.606 [2024-12-09 11:00:47.917475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 [2024-12-09 11:00:47.917484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb8290 is same with the state(6) to be set 00:18:15.606 [2024-12-09 11:00:47.917495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.606 [2024-12-09 11:00:47.917500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.606 [2024-12-09 11:00:47.917506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79664 len:8 PRP1 0x0 PRP2 0x0 00:18:15.606 [2024-12-09 11:00:47.917514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 [2024-12-09 11:00:47.917522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.606 [2024-12-09 11:00:47.917528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.606 [2024-12-09 11:00:47.917534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80176 len:8 PRP1 0x0 PRP2 0x0 00:18:15.606 [2024-12-09 11:00:47.917542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 [2024-12-09 11:00:47.917550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.606 [2024-12-09 11:00:47.917555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.606 [2024-12-09 11:00:47.917561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80184 len:8 PRP1 0x0 PRP2 0x0 00:18:15.606 [2024-12-09 11:00:47.917569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 [2024-12-09 11:00:47.917580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.606 [2024-12-09 11:00:47.917586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.606 [2024-12-09 11:00:47.917592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80192 len:8 PRP1 0x0 PRP2 0x0 00:18:15.606 [2024-12-09 11:00:47.917599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 [2024-12-09 11:00:47.917607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.606 [2024-12-09 11:00:47.917613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.606 [2024-12-09 11:00:47.917618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80200 len:8 PRP1 0x0 PRP2 0x0 00:18:15.606 [2024-12-09 11:00:47.917627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 [2024-12-09 11:00:47.917635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.606 [2024-12-09 11:00:47.917640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.606 [2024-12-09 11:00:47.917646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80208 len:8 PRP1 0x0 PRP2 0x0 00:18:15.606 [2024-12-09 11:00:47.917654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 [2024-12-09 11:00:47.917663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.606 [2024-12-09 11:00:47.917669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.606 [2024-12-09 11:00:47.917675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80216 len:8 PRP1 0x0 PRP2 0x0 00:18:15.606 [2024-12-09 11:00:47.917682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 [2024-12-09 11:00:47.917690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.606 [2024-12-09 11:00:47.917696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.606 [2024-12-09 11:00:47.917702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80224 len:8 PRP1 0x0 PRP2 0x0 00:18:15.606 [2024-12-09 11:00:47.917709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 
[2024-12-09 11:00:47.917717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.606 [2024-12-09 11:00:47.917722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.606 [2024-12-09 11:00:47.917728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80232 len:8 PRP1 0x0 PRP2 0x0 00:18:15.606 [2024-12-09 11:00:47.917735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 [2024-12-09 11:00:47.917743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:15.606 [2024-12-09 11:00:47.917749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:15.606 [2024-12-09 11:00:47.917760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80240 len:8 PRP1 0x0 PRP2 0x0 00:18:15.606 [2024-12-09 11:00:47.917784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 [2024-12-09 11:00:47.918678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:18:15.606 [2024-12-09 11:00:47.918735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.606 [2024-12-09 11:00:47.918747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:15.606 [2024-12-09 11:00:47.918790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf291e0 (9): Bad file descriptor 00:18:15.606 [2024-12-09 11:00:47.919084] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:15.606 [2024-12-09 11:00:47.919115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf291e0 with addr=10.0.0.3, port=4421 00:18:15.606 [2024-12-09 11:00:47.919125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf291e0 is same with the state(6) to be set 00:18:15.606 [2024-12-09 11:00:47.919159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf291e0 (9): Bad file descriptor 00:18:15.606 [2024-12-09 11:00:47.919178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:18:15.606 [2024-12-09 11:00:47.919187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:18:15.606 [2024-12-09 11:00:47.919196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:18:15.606 [2024-12-09 11:00:47.919204] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:18:15.606 [2024-12-09 11:00:47.919216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:18:15.606 7034.29 IOPS, 27.48 MiB/s [2024-12-09T11:01:08.785Z] 7101.78 IOPS, 27.74 MiB/s [2024-12-09T11:01:08.785Z] 7167.00 IOPS, 28.00 MiB/s [2024-12-09T11:01:08.785Z] 7221.26 IOPS, 28.21 MiB/s [2024-12-09T11:01:08.785Z] 7264.41 IOPS, 28.38 MiB/s [2024-12-09T11:01:08.785Z] 7312.20 IOPS, 28.56 MiB/s [2024-12-09T11:01:08.785Z] 7354.73 IOPS, 28.73 MiB/s [2024-12-09T11:01:08.785Z] 7399.81 IOPS, 28.91 MiB/s [2024-12-09T11:01:08.785Z] 7439.07 IOPS, 29.06 MiB/s [2024-12-09T11:01:08.785Z] 7483.27 IOPS, 29.23 MiB/s [2024-12-09T11:01:08.785Z] [2024-12-09 11:00:57.952689] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:18:15.606 7526.20 IOPS, 29.40 MiB/s [2024-12-09T11:01:08.785Z] 7563.98 IOPS, 29.55 MiB/s [2024-12-09T11:01:08.785Z] 7600.83 IOPS, 29.69 MiB/s [2024-12-09T11:01:08.785Z] 7634.48 IOPS, 29.82 MiB/s [2024-12-09T11:01:08.785Z] 7663.39 IOPS, 29.94 MiB/s [2024-12-09T11:01:08.785Z] 7695.18 IOPS, 30.06 MiB/s [2024-12-09T11:01:08.785Z] 7723.43 IOPS, 30.17 MiB/s [2024-12-09T11:01:08.785Z] 7749.12 IOPS, 30.27 MiB/s [2024-12-09T11:01:08.785Z] 7774.62 IOPS, 30.37 MiB/s [2024-12-09T11:01:08.785Z] 7799.72 IOPS, 30.47 MiB/s [2024-12-09T11:01:08.785Z] Received shutdown signal, test time was about 54.293959 seconds
00:18:15.606
00:18:15.606 Latency(us)
00:18:15.606 [2024-12-09T11:01:08.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:15.606 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:15.606 Verification LBA range: start 0x0 length 0x4000
00:18:15.606 Nvme0n1 : 54.29 7806.08 30.49 0.00 0.00 16379.37 568.79 7033243.39
00:18:15.606 [2024-12-09T11:01:08.785Z] ===================================================================================================================
00:18:15.606 [2024-12-09T11:01:08.785Z] Total : 7806.08 30.49 0.00 0.00 16379.37 568.79 7033243.39
00:18:15.606 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:15.606 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:18:15.606 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:15.606 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:18:15.606 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:15.606 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:18:15.606 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:15.606 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:18:15.606 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:15.606 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:15.606 rmmod nvme_tcp
00:18:15.606 rmmod nvme_fabrics
00:18:15.606 rmmod nvme_keyring
00:18:15.606 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:15.606 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:18:15.606 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:18:15.607 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80726 ']' 00:18:15.607 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80726 00:18:15.607 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 80726 ']' 00:18:15.607 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 80726 00:18:15.607 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:18:15.607 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.607 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80726 00:18:15.607 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:15.607 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:15.607 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80726' 00:18:15.607 killing process with pid 80726 00:18:15.607 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 80726 00:18:15.607 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 80726 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 
-- # ip link delete nvmf_br type bridge 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:15.867 11:01:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:15.867 11:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:15.867 11:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:15.867 11:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.867 11:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:15.867 11:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.128 11:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:18:16.128 00:18:16.128 real 1m0.011s 00:18:16.128 user 2m46.990s 00:18:16.128 sys 0m16.290s 00:18:16.128 11:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.128 ************************************ 00:18:16.128 END TEST nvmf_host_multipath 00:18:16.128 ************************************ 00:18:16.128 11:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:16.128 11:01:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:16.128 11:01:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:16.128 11:01:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.128 11:01:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.128 ************************************ 00:18:16.128 START TEST nvmf_timeout 00:18:16.128 ************************************ 00:18:16.128 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:16.128 * Looking for test storage... 
00:18:16.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:16.128 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:16.128 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:18:16.128 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:16.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.389 --rc genhtml_branch_coverage=1 00:18:16.389 --rc genhtml_function_coverage=1 00:18:16.389 --rc genhtml_legend=1 00:18:16.389 --rc geninfo_all_blocks=1 00:18:16.389 --rc geninfo_unexecuted_blocks=1 00:18:16.389 00:18:16.389 ' 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:16.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.389 --rc genhtml_branch_coverage=1 00:18:16.389 --rc genhtml_function_coverage=1 00:18:16.389 --rc genhtml_legend=1 00:18:16.389 --rc geninfo_all_blocks=1 00:18:16.389 --rc geninfo_unexecuted_blocks=1 00:18:16.389 00:18:16.389 ' 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:16.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.389 --rc genhtml_branch_coverage=1 00:18:16.389 --rc genhtml_function_coverage=1 00:18:16.389 --rc genhtml_legend=1 00:18:16.389 --rc geninfo_all_blocks=1 00:18:16.389 --rc geninfo_unexecuted_blocks=1 00:18:16.389 00:18:16.389 ' 00:18:16.389 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:16.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.389 --rc genhtml_branch_coverage=1 00:18:16.389 --rc genhtml_function_coverage=1 00:18:16.389 --rc genhtml_legend=1 00:18:16.389 --rc geninfo_all_blocks=1 00:18:16.389 --rc geninfo_unexecuted_blocks=1 00:18:16.389 00:18:16.389 ' 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.390 
11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:16.390 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:16.390 11:01:09 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:16.390 Cannot find device "nvmf_init_br" 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:16.390 Cannot find device "nvmf_init_br2" 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:18:16.390 Cannot find device "nvmf_tgt_br" 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:16.390 Cannot find device "nvmf_tgt_br2" 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:16.390 Cannot find device "nvmf_init_br" 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:16.390 Cannot find device "nvmf_init_br2" 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:16.390 Cannot find device "nvmf_tgt_br" 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:16.390 Cannot find device "nvmf_tgt_br2" 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:18:16.390 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:16.650 Cannot find device "nvmf_br" 00:18:16.650 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:18:16.650 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:16.650 Cannot find device "nvmf_init_if" 00:18:16.650 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:18:16.650 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:16.650 Cannot find device "nvmf_init_if2" 00:18:16.650 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:18:16.650 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:16.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.650 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:18:16.650 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:16.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.650 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:18:16.650 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:16.650 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
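The trace above shows nvmf_veth_init building the test network for the timeout run: a nvmf_tgt_ns_spdk namespace, initiator/target veth pairs, a nvmf_br bridge joining them, 10.0.0.1-10.0.0.4/24 addressing, and iptables ACCEPT rules for port 4420. A minimal standalone sketch of that topology, consolidated from the commands visible in the trace (only the first initiator/target pair is shown; interface and namespace names are the harness's own), would be roughly:

    # target network namespace plus veth pairs bridged on the host
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic on port 4420 and forwarding across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # connectivity check in both directions, as the harness does next
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The ping output that follows in the log is the harness verifying this topology before starting nvmf_tgt inside the namespace.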
00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:16.651 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:16.651 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:18:16.651 00:18:16.651 --- 10.0.0.3 ping statistics --- 00:18:16.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.651 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:16.651 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:16.651 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.029 ms 00:18:16.651 00:18:16.651 --- 10.0.0.4 ping statistics --- 00:18:16.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.651 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:16.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:16.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:18:16.651 00:18:16.651 --- 10.0.0.1 ping statistics --- 00:18:16.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.651 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:16.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:18:16.651 00:18:16.651 --- 10.0.0.2 ping statistics --- 00:18:16.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.651 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:16.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81939 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81939 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81939 ']' 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.651 11:01:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:16.911 [2024-12-09 11:01:09.857125] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:18:16.911 [2024-12-09 11:01:09.857188] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.911 [2024-12-09 11:01:10.008851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:16.911 [2024-12-09 11:01:10.075639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.911 [2024-12-09 11:01:10.075696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.911 [2024-12-09 11:01:10.075703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.911 [2024-12-09 11:01:10.075709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.911 [2024-12-09 11:01:10.075714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:16.911 [2024-12-09 11:01:10.077126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.911 [2024-12-09 11:01:10.077135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.171 [2024-12-09 11:01:10.154849] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:17.740 11:01:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.740 11:01:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:17.740 11:01:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:17.740 11:01:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.740 11:01:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:17.740 11:01:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.740 11:01:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:17.740 11:01:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:18.000 [2024-12-09 11:01:10.920596] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.000 11:01:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:18.000 Malloc0 00:18:18.000 11:01:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:18.259 11:01:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:18.519 11:01:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:18.778 [2024-12-09 11:01:11.697807] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:18.778 11:01:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81981 00:18:18.778 11:01:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:18.778 11:01:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81981 /var/tmp/bdevperf.sock 00:18:18.778 11:01:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 81981 ']' 00:18:18.778 11:01:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.778 11:01:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.778 11:01:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:18.778 11:01:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.778 11:01:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:18.778 [2024-12-09 11:01:11.765764] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:18:18.778 [2024-12-09 11:01:11.765824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81981 ] 00:18:18.778 [2024-12-09 11:01:11.917609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.037 [2024-12-09 11:01:11.963590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.037 [2024-12-09 11:01:12.003574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:19.604 11:01:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.604 11:01:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:19.604 11:01:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:19.862 11:01:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:20.122 NVMe0n1 00:18:20.122 11:01:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82005 00:18:20.122 11:01:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:20.122 11:01:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:20.122 Running I/O for 10 seconds... 
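At this point the timeout test has a complete target/initiator pair running. Condensed from the RPC trace above (socket paths, subsystem names, and bdev names are the ones the harness uses; rpc.py and bdevperf.py live under the SPDK repo's scripts/ and examples/bdev/bdevperf/ directories), the setup sequence is roughly:

    # target side: nvmf_tgt runs inside nvmf_tgt_ns_spdk, RPCs go to the default /var/tmp/spdk.sock
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # initiator side: bdevperf was started with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The --ctrlr-loss-timeout-sec and --reconnect-delay-sec options on the attach appear to be what the subsequent timeout scenarios exercise; the qpair dumps that follow are the I/O log produced once perform_tests starts the 10-second verify workload.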
00:18:21.058 11:01:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:21.319 7166.00 IOPS, 27.99 MiB/s [2024-12-09T11:01:14.498Z] [2024-12-09 11:01:14.275953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.319 [2024-12-09 11:01:14.275995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.319 [2024-12-09 11:01:14.276020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.319 [2024-12-09 11:01:14.276025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.319 [2024-12-09 11:01:14.276032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.319 [2024-12-09 11:01:14.276044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.319 [2024-12-09 11:01:14.276051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:21.319 [2024-12-09 11:01:14.276056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.319 [2024-12-09 11:01:14.276062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912e50 is same with the state(6) to be set 00:18:21.319 [2024-12-09 11:01:14.276247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.319 [2024-12-09 11:01:14.276258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.319 [2024-12-09 11:01:14.276271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.319 [2024-12-09 11:01:14.276277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.319 [2024-12-09 11:01:14.276284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.319 [2024-12-09 11:01:14.276290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.319 [2024-12-09 11:01:14.276297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.319 [2024-12-09 11:01:14.276302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.319 [2024-12-09 11:01:14.276309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.319 [2024-12-09 11:01:14.276314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:21.319 [2024-12-09 11:01:14.276321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.319 [2024-12-09 11:01:14.276326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.319 [2024-12-09 11:01:14.276333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.319 [2024-12-09 11:01:14.276338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.319 [2024-12-09 11:01:14.276345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.319 [2024-12-09 11:01:14.276350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.319 [2024-12-09 11:01:14.276357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.319 [2024-12-09 11:01:14.276363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.319 [2024-12-09 11:01:14.276369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.319 [2024-12-09 11:01:14.276374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.319 [2024-12-09 11:01:14.276383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.319 [2024-12-09 11:01:14.276387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.319 [2024-12-09 11:01:14.276394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 
11:01:14.276477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:59 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63656 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.320 [2024-12-09 11:01:14.276958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.320 [2024-12-09 11:01:14.276964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.276970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.276975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.276981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.276990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.276997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 
11:01:14.277014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.321 [2024-12-09 11:01:14.277562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.321 [2024-12-09 11:01:14.277567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277614] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277742] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64224 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.277964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.277971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.278001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.278008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.278035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.278042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.278047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.278053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:21.322 [2024-12-09 11:01:14.278060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.278066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:21.322 [2024-12-09 11:01:14.278076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.278082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972690 is same with the state(6) to be set 00:18:21.322 [2024-12-09 11:01:14.278089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:21.322 [2024-12-09 11:01:14.278093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:21.322 [2024-12-09 11:01:14.278098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64312 len:8 PRP1 0x0 PRP2 0x0 00:18:21.322 [2024-12-09 11:01:14.278109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:21.322 [2024-12-09 11:01:14.278341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:21.322 [2024-12-09 11:01:14.278364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1912e50 (9): Bad file descriptor 00:18:21.322 [2024-12-09 11:01:14.278430] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:21.322 [2024-12-09 11:01:14.278441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1912e50 with addr=10.0.0.3, port=4420 00:18:21.322 [2024-12-09 11:01:14.278448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912e50 is same with the state(6) to be set 00:18:21.322 [2024-12-09 11:01:14.278457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1912e50 (9): Bad file descriptor 00:18:21.322 [2024-12-09 11:01:14.278467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:21.322 [2024-12-09 11:01:14.278478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:21.322 [2024-12-09 11:01:14.278484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:21.323 [2024-12-09 11:01:14.278491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
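The repeated "connect() failed, errno = 111" lines above come from the uring socket layer having its TCP connection refused by the target during the reconnect attempts; on Linux, errno 111 is ECONNREFUSED, which can be confirmed with a quick shell one-liner (a check added here for reference, not part of the test script):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'   # ECONNREFUSED - Connection refused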
00:18:21.323 [2024-12-09 11:01:14.278496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:21.323 11:01:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:18:23.192 3956.00 IOPS, 15.45 MiB/s [2024-12-09T11:01:16.371Z] 2637.33 IOPS, 10.30 MiB/s [2024-12-09T11:01:16.371Z] [2024-12-09 11:01:16.274894] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:23.192 [2024-12-09 11:01:16.274959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1912e50 with addr=10.0.0.3, port=4420 00:18:23.192 [2024-12-09 11:01:16.274969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912e50 is same with the state(6) to be set 00:18:23.192 [2024-12-09 11:01:16.274985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1912e50 (9): Bad file descriptor 00:18:23.192 [2024-12-09 11:01:16.274998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:23.192 [2024-12-09 11:01:16.275005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:23.192 [2024-12-09 11:01:16.275012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:23.192 [2024-12-09 11:01:16.275019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:18:23.192 [2024-12-09 11:01:16.275026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:23.192 11:01:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:18:23.192 11:01:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:23.192 11:01:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:23.451 11:01:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:23.451 11:01:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:18:23.451 11:01:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:23.451 11:01:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:23.709 11:01:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:23.709 11:01:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:18:25.211 1978.00 IOPS, 7.73 MiB/s [2024-12-09T11:01:18.390Z] 1582.40 IOPS, 6.18 MiB/s [2024-12-09T11:01:18.390Z] [2024-12-09 11:01:18.271396] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:25.211 [2024-12-09 11:01:18.271435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1912e50 with addr=10.0.0.3, port=4420 00:18:25.211 [2024-12-09 11:01:18.271445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912e50 is same with the state(6) to be set 00:18:25.211 [2024-12-09 11:01:18.271477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1912e50 (9): Bad file descriptor 00:18:25.211 [2024-12-09 11:01:18.271490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:18:25.211 [2024-12-09 11:01:18.271496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:18:25.211 [2024-12-09 11:01:18.271503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:18:25.211 [2024-12-09 11:01:18.271511] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:18:25.211 [2024-12-09 11:01:18.271518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:18:27.084 1318.67 IOPS, 5.15 MiB/s [2024-12-09T11:01:20.522Z] 1130.29 IOPS, 4.42 MiB/s [2024-12-09T11:01:20.522Z] [2024-12-09 11:01:20.267864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:18:27.343 [2024-12-09 11:01:20.267919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:18:27.343 [2024-12-09 11:01:20.267928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:18:27.343 [2024-12-09 11:01:20.267937] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:18:27.343 [2024-12-09 11:01:20.267949] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:18:28.281 989.00 IOPS, 3.86 MiB/s
00:18:28.281 Latency(us)
00:18:28.281 [2024-12-09T11:01:21.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:28.281 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:28.281 Verification LBA range: start 0x0 length 0x4000
00:18:28.281 NVMe0n1 : 8.10 977.15 3.82 15.81 0.00 129069.69 2690.12 7033243.39
00:18:28.281 [2024-12-09T11:01:21.460Z] ===================================================================================================================
00:18:28.281 [2024-12-09T11:01:21.460Z] Total : 977.15 3.82 15.81 0.00 129069.69 2690.12 7033243.39
00:18:28.281 {
00:18:28.281 "results": [
00:18:28.281 {
00:18:28.281 "job": "NVMe0n1",
00:18:28.281 "core_mask": "0x4",
00:18:28.281 "workload": "verify",
00:18:28.281 "status": "finished",
00:18:28.281 "verify_range": {
00:18:28.281 "start": 0,
00:18:28.281 "length": 16384
00:18:28.281 },
00:18:28.281 "queue_depth": 128,
00:18:28.281 "io_size": 4096,
00:18:28.281 "runtime": 8.097053,
00:18:28.281 "iops": 977.1456355787717,
00:18:28.281 "mibps": 3.816975138979577,
00:18:28.281 "io_failed": 128,
00:18:28.281 "io_timeout": 0,
00:18:28.281 "avg_latency_us": 129069.69234873666,
00:18:28.281 "min_latency_us": 2690.124017467249,
00:18:28.281 "max_latency_us": 7033243.388646288
00:18:28.281 }
00:18:28.281 ],
00:18:28.281 "core_count": 1
00:18:28.281 }
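As a quick cross-check of the summary line, the MiB/s and Fail/s columns follow directly from the iops, io_size, io_failed and runtime fields in the JSON above (a small sketch, values copied from that JSON):

  # Recompute the summary columns from the per-job JSON fields reported above.
  awk 'BEGIN { iops = 977.1456355787717; io_size = 4096
               printf "%.6f MiB/s\n", iops * io_size / (1024 * 1024)   # 3.816975, matches "mibps"
               printf "%.2f Fail/s\n", 128 / 8.097053 }'               # 15.81, matches the Fail/s column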
00:18:28.541 11:01:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:18:28.541 11:01:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 11:01:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:18:28.801 11:01:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:18:28.801 11:01:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:18:28.801 11:01:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:18:29.060 11:01:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:18:29.060 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:18:29.060 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82005
00:18:29.060 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81981
00:18:29.060 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81981 ']'
00:18:29.060 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81981
00:18:29.060 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:18:29.060 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:29.060 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81981
00:18:29.060 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:18:29.060 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:18:29.060 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81981'
00:18:29.060 killing process with pid 81981
00:18:29.060 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81981
00:18:29.060 Received shutdown signal, test time was about 8.971460 seconds
00:18:29.060
00:18:29.060 Latency(us)
00:18:29.060 [2024-12-09T11:01:22.239Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:29.060 [2024-12-09T11:01:22.239Z] ===================================================================================================================
00:18:29.060 [2024-12-09T11:01:22.239Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:29.060 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81981
00:18:29.320 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:18:29.579 [2024-12-09 11:01:22.626989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:18:29.579 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82129
00:18:29.579 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:18:29.579 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82129 /var/tmp/bdevperf.sock
00:18:29.579 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82129 ']'
00:18:29.579 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:29.579 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:29.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:29.579 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.579 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.579 11:01:22 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:29.579 [2024-12-09 11:01:22.690976] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:18:29.579 [2024-12-09 11:01:22.691031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82129 ] 00:18:29.839 [2024-12-09 11:01:22.821548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.839 [2024-12-09 11:01:22.883570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.839 [2024-12-09 11:01:22.957727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:30.778 11:01:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.778 11:01:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:30.778 11:01:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:30.778 11:01:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:18:31.038 NVMe0n1 00:18:31.038 11:01:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:31.038 11:01:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82147 00:18:31.038 11:01:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:18:31.038 Running I/O for 10 seconds... 
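For this second run, the trace at host/timeout.sh@78 and @79 above attaches the controller with explicit reconnect knobs rather than the defaults. A minimal stand-alone reproduction of that setup against the same bdevperf RPC socket (flags, address and subsystem NQN copied from the trace; the rpc/sock shell variables are shorthand introduced here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # Same option as traced at host/timeout.sh@78 (a retry-count setting, with -1 presumably meaning unlimited retries).
  "$rpc" -s "$sock" bdev_nvme_set_options -r -1
  # Per the flag names: retry the connection every 1s, fail pending I/O after 2s,
  # and give the controller up for lost after 5s without a successful reconnect.
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1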
00:18:31.977 11:01:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:32.236 11107.00 IOPS, 43.39 MiB/s [2024-12-09T11:01:25.415Z] [2024-12-09 11:01:25.263173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.236 [2024-12-09 11:01:25.263232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.236 [2024-12-09 11:01:25.263250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.236 [2024-12-09 11:01:25.263257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.236 [2024-12-09 11:01:25.263266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.236 [2024-12-09 11:01:25.263273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.236 [2024-12-09 11:01:25.263281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.236 [2024-12-09 11:01:25.263287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.236 [2024-12-09 11:01:25.263295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.236 [2024-12-09 11:01:25.263301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.236 [2024-12-09 11:01:25.263308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.236 [2024-12-09 11:01:25.263315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.236 [2024-12-09 11:01:25.263322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.236 [2024-12-09 11:01:25.263328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.236 [2024-12-09 11:01:25.263336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.236 [2024-12-09 11:01:25.263342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.236 [2024-12-09 11:01:25.263350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.236 [2024-12-09 11:01:25.263357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.236 [2024-12-09 11:01:25.263364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98400 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.236 [2024-12-09 11:01:25.263371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.236 [2024-12-09 11:01:25.263383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.236 [2024-12-09 11:01:25.263388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.236 [2024-12-09 11:01:25.263396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:32.237 [2024-12-09 11:01:25.263515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263661] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.237 [2024-12-09 11:01:25.263734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.237 [2024-12-09 11:01:25.263768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.237 [2024-12-09 11:01:25.263784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.237 [2024-12-09 11:01:25.263798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.237 [2024-12-09 11:01:25.263813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.237 [2024-12-09 11:01:25.263828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.237 [2024-12-09 11:01:25.263842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.237 [2024-12-09 11:01:25.263857] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.263985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.263991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.264041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.264049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.264056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.264064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.264094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.264101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.237 [2024-12-09 11:01:25.264109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.237 [2024-12-09 11:01:25.264115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.238 [2024-12-09 11:01:25.264200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.238 [2024-12-09 11:01:25.264213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.238 [2024-12-09 11:01:25.264227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.238 [2024-12-09 11:01:25.264240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:32.238 [2024-12-09 11:01:25.264253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.238 [2024-12-09 11:01:25.264260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.238 [2024-12-09 11:01:25.264274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.238 [2024-12-09 11:01:25.264298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.238 [2024-12-09 11:01:25.264313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264416] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.238 [2024-12-09 11:01:25.264691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.238 [2024-12-09 11:01:25.264705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.238 [2024-12-09 11:01:25.264719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:2 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.238 [2024-12-09 11:01:25.264741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.238 [2024-12-09 11:01:25.264765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.238 [2024-12-09 11:01:25.264773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.239 [2024-12-09 11:01:25.264779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.264787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.239 [2024-12-09 11:01:25.264794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.264801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.239 [2024-12-09 11:01:25.264810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.264817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.239 [2024-12-09 11:01:25.264823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.264831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.239 [2024-12-09 11:01:25.264837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.264846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.239 [2024-12-09 11:01:25.264862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.264879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.239 [2024-12-09 11:01:25.264885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.264892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.239 [2024-12-09 11:01:25.264905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.264914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99320 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:32.239 [2024-12-09 11:01:25.264920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.264928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.239 [2024-12-09 11:01:25.264935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.264943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.239 [2024-12-09 11:01:25.264950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.264958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.239 [2024-12-09 11:01:25.264964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.264971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.239 [2024-12-09 11:01:25.264977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.264985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.239 [2024-12-09 11:01:25.264997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.239 [2024-12-09 11:01:25.265010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.239 [2024-12-09 11:01:25.265024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.239 [2024-12-09 11:01:25.265037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.239 [2024-12-09 11:01:25.265050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.239 
[2024-12-09 11:01:25.265063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.239 [2024-12-09 11:01:25.265083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.239 [2024-12-09 11:01:25.265106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.239 [2024-12-09 11:01:25.265124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.239 [2024-12-09 11:01:25.265142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.239 [2024-12-09 11:01:25.265156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.239 [2024-12-09 11:01:25.265178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.239 [2024-12-09 11:01:25.265192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:32.239 [2024-12-09 11:01:25.265206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a3690 is same with the state(6) to be set 00:18:32.239 [2024-12-09 11:01:25.265223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:32.239 [2024-12-09 11:01:25.265228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:32.239 [2024-12-09 11:01:25.265234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99024 len:8 PRP1 0x0 PRP2 0x0 
00:18:32.239 [2024-12-09 11:01:25.265239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:32.239 [2024-12-09 11:01:25.265259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:32.239 [2024-12-09 11:01:25.265264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99352 len:8 PRP1 0x0 PRP2 0x0 00:18:32.239 [2024-12-09 11:01:25.265271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:32.239 [2024-12-09 11:01:25.265282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:32.239 [2024-12-09 11:01:25.265287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99360 len:8 PRP1 0x0 PRP2 0x0 00:18:32.239 [2024-12-09 11:01:25.265293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:32.239 [2024-12-09 11:01:25.265304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:32.239 [2024-12-09 11:01:25.265309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99368 len:8 PRP1 0x0 PRP2 0x0 00:18:32.239 [2024-12-09 11:01:25.265315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265321] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:32.239 [2024-12-09 11:01:25.265329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:32.239 [2024-12-09 11:01:25.265335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99376 len:8 PRP1 0x0 PRP2 0x0 00:18:32.239 [2024-12-09 11:01:25.265346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:32.239 [2024-12-09 11:01:25.265358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:32.239 [2024-12-09 11:01:25.265363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99384 len:8 PRP1 0x0 PRP2 0x0 00:18:32.239 [2024-12-09 11:01:25.265369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:32.239 [2024-12-09 11:01:25.265379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:32.239 [2024-12-09 11:01:25.265384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99392 len:8 PRP1 0x0 PRP2 0x0 00:18:32.239 [2024-12-09 11:01:25.265390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:32.239 [2024-12-09 11:01:25.265401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:32.239 [2024-12-09 11:01:25.265406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99400 len:8 PRP1 0x0 PRP2 0x0 00:18:32.239 [2024-12-09 11:01:25.265412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.239 [2024-12-09 11:01:25.265418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:32.240 [2024-12-09 11:01:25.265423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:32.240 [2024-12-09 11:01:25.265428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99408 len:8 PRP1 0x0 PRP2 0x0 00:18:32.240 [2024-12-09 11:01:25.265439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.240 [2024-12-09 11:01:25.265613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.240 [2024-12-09 11:01:25.265632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.240 [2024-12-09 11:01:25.265641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.240 [2024-12-09 11:01:25.265647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.240 [2024-12-09 11:01:25.265655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.240 [2024-12-09 11:01:25.265661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.240 [2024-12-09 11:01:25.265668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:32.240 [2024-12-09 11:01:25.265674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:32.240 [2024-12-09 11:01:25.265680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1843e50 is same with the state(6) to be set 00:18:32.240 [2024-12-09 11:01:25.265851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:18:32.240 [2024-12-09 11:01:25.265891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1843e50 (9): Bad file descriptor 00:18:32.240 [2024-12-09 11:01:25.265977] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:32.240 [2024-12-09 11:01:25.265993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1843e50 with addr=10.0.0.3, port=4420 00:18:32.240 [2024-12-09 11:01:25.266000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1843e50 is same with the state(6) to be set
00:18:32.240 [2024-12-09 11:01:25.266013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1843e50 (9): Bad file descriptor
00:18:32.240 [2024-12-09 11:01:25.266026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:18:32.240 [2024-12-09 11:01:25.266034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:18:32.240 [2024-12-09 11:01:25.266045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
[2024-12-09 11:01:25.266054] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
11:01:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:18:32.240 [2024-12-09 11:01:25.283151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:18:33.200 6149.50 IOPS, 24.02 MiB/s
[2024-12-09T11:01:26.379Z] 11:01:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:18:33.200 [2024-12-09 11:01:26.281409] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:33.200 [2024-12-09 11:01:26.281457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1843e50 with addr=10.0.0.3, port=4420
00:18:33.200 [2024-12-09 11:01:26.281469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1843e50 is same with the state(6) to be set
00:18:33.200 [2024-12-09 11:01:26.281490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1843e50 (9): Bad file descriptor
00:18:33.200 [2024-12-09 11:01:26.281506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:18:33.200 [2024-12-09 11:01:26.281514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:18:33.200 [2024-12-09 11:01:26.281524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:18:33.200 [2024-12-09 11:01:26.281532] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:18:33.200 [2024-12-09 11:01:26.281543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:18:33.478 [2024-12-09 11:01:26.462724] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:18:33.478 11:01:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82147
00:18:34.307 4099.67 IOPS, 16.01 MiB/s
[2024-12-09T11:01:27.486Z] [2024-12-09 11:01:27.299864] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
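(For readers following the test flow: the reconnect sequence above is driven by the timeout test bouncing the subsystem's TCP listener over RPC. A minimal sketch of that listener bounce, using only the rpc.py calls and arguments that appear in this log; the surrounding logic in host/timeout.sh is omitted and the relative paths are assumptions:)

  # drop the TCP listener so host I/O times out and the in-flight controller reset keeps failing
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 1
  # restore the listener; the next bdev_nvme reconnect attempt can then complete ("Resetting controller successful")
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420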
00:18:36.185 3074.75 IOPS, 12.01 MiB/s
[2024-12-09T11:01:30.304Z] 4682.40 IOPS, 18.29 MiB/s
[2024-12-09T11:01:31.243Z] 5952.17 IOPS, 23.25 MiB/s
[2024-12-09T11:01:32.184Z] 6880.71 IOPS, 26.88 MiB/s
[2024-12-09T11:01:33.565Z] 7554.00 IOPS, 29.51 MiB/s
[2024-12-09T11:01:34.505Z] 8080.89 IOPS, 31.57 MiB/s
[2024-12-09T11:01:34.505Z] 8506.60 IOPS, 33.23 MiB/s
00:18:41.326 Latency(us)
00:18:41.326 [2024-12-09T11:01:34.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:41.326 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:41.326 Verification LBA range: start 0x0 length 0x4000
00:18:41.326 NVMe0n1 : 10.01 8511.15 33.25 0.00 0.00 15014.61 897.90 3018433.62
00:18:41.326 [2024-12-09T11:01:34.505Z] ===================================================================================================================
00:18:41.326 [2024-12-09T11:01:34.505Z] Total : 8511.15 33.25 0.00 0.00 15014.61 897.90 3018433.62
00:18:41.326 {
00:18:41.326   "results": [
00:18:41.326     {
00:18:41.326       "job": "NVMe0n1",
00:18:41.326       "core_mask": "0x4",
00:18:41.326       "workload": "verify",
00:18:41.326       "status": "finished",
00:18:41.326       "verify_range": {
00:18:41.326         "start": 0,
00:18:41.326         "length": 16384
00:18:41.326       },
00:18:41.326       "queue_depth": 128,
00:18:41.326       "io_size": 4096,
00:18:41.326       "runtime": 10.005694,
00:18:41.326       "iops": 8511.153749055287,
00:18:41.326       "mibps": 33.246694332247216,
00:18:41.326       "io_failed": 0,
00:18:41.326       "io_timeout": 0,
00:18:41.326       "avg_latency_us": 15014.606373781897,
00:18:41.326       "min_latency_us": 897.9004366812227,
00:18:41.326       "max_latency_us": 3018433.6209606985
00:18:41.326     }
00:18:41.326   ],
00:18:41.326   "core_count": 1
00:18:41.326 }
00:18:41.326 11:01:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82257
00:18:41.327 11:01:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:41.327 11:01:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:18:41.327 Running I/O for 10 seconds...
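(The latency table and JSON summary above are bdevperf output collected over its RPC socket; the run parameters echoed there -- core mask 0x4, queue depth 128, 4096-byte I/O, verify workload, ~10 s runtime -- correspond to a flow like the following minimal sketch. The bdevperf binary path and the -z/-r/-q/-o/-w/-t flags are assumptions about the usual invocation rather than something shown in this log, and the bdev/controller attachment config is omitted:)

  # start bdevperf idle, waiting for a perform_tests RPC on /var/tmp/bdevperf.sock
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # trigger the run; the JSON summary is printed when the test finishes
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests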
00:18:42.268 11:01:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:42.268 10381.00 IOPS, 40.55 MiB/s [2024-12-09T11:01:35.447Z] [2024-12-09 11:01:35.372353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.268 [2024-12-09 11:01:35.372421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.268 [2024-12-09 11:01:35.372441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.268 [2024-12-09 11:01:35.372449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.268 [2024-12-09 11:01:35.372458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.268 [2024-12-09 11:01:35.372465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.268 [2024-12-09 11:01:35.372474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.268 [2024-12-09 11:01:35.372481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.268 [2024-12-09 11:01:35.372489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.268 [2024-12-09 11:01:35.372495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.268 [2024-12-09 11:01:35.372503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.268 [2024-12-09 11:01:35.372510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.268 [2024-12-09 11:01:35.372517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.268 [2024-12-09 11:01:35.372524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.268 [2024-12-09 11:01:35.372532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.268 [2024-12-09 11:01:35.372539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.268 [2024-12-09 11:01:35.372547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.268 [2024-12-09 11:01:35.372553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.268 [2024-12-09 11:01:35.372565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92976 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.372571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.372586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.372600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.372615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.372629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.372644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.372658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.372671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.372687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.372701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 
[2024-12-09 11:01:35.372716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.372730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.372757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.372775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.372790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.372803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.372819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.372832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.372846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.372897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.372939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.372957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.372985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.372993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.373000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.373016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.373040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.373057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.373081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.373098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.373126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.373143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.373165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.373181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.373204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.373220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.373234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.373254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.373287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.373302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.373316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.269 [2024-12-09 11:01:35.373331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.373378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.373393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.373408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.373482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.269 [2024-12-09 11:01:35.373509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.269 [2024-12-09 11:01:35.373517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.373523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.373537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.373557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.270 [2024-12-09 11:01:35.373572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.270 [2024-12-09 11:01:35.373594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 
[2024-12-09 11:01:35.373603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.270 [2024-12-09 11:01:35.373609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.270 [2024-12-09 11:01:35.373624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.270 [2024-12-09 11:01:35.373638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.270 [2024-12-09 11:01:35.373678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.270 [2024-12-09 11:01:35.373716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.270 [2024-12-09 11:01:35.373778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.270 [2024-12-09 11:01:35.373796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.270 [2024-12-09 11:01:35.373832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.270 [2024-12-09 11:01:35.373868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.270 [2024-12-09 11:01:35.373904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.270 [2024-12-09 11:01:35.373927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.270 [2024-12-09 11:01:35.373945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.373968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.373985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.373993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:88 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92728 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.270 [2024-12-09 11:01:35.374300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.270 [2024-12-09 11:01:35.374315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.271 [2024-12-09 11:01:35.374380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.271 [2024-12-09 11:01:35.374405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.271 [2024-12-09 11:01:35.374422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.271 [2024-12-09 11:01:35.374437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.271 
[2024-12-09 11:01:35.374457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.271 [2024-12-09 11:01:35.374471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.271 [2024-12-09 11:01:35.374495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:42.271 [2024-12-09 11:01:35.374511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:42.271 [2024-12-09 11:01:35.374818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a21b0 is same with the state(6) to be set 00:18:42.271 [2024-12-09 11:01:35.374842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.271 [2024-12-09 11:01:35.374849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.271 [2024-12-09 11:01:35.374854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92896 len:8 PRP1 0x0 PRP2 0x0 00:18:42.271 [2024-12-09 11:01:35.374860] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.271 [2024-12-09 11:01:35.374874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.271 [2024-12-09 11:01:35.374888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93352 len:8 PRP1 0x0 PRP2 0x0 00:18:42.271 [2024-12-09 11:01:35.374895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.271 [2024-12-09 11:01:35.374907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.271 [2024-12-09 11:01:35.374912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93360 len:8 PRP1 0x0 PRP2 0x0 00:18:42.271 [2024-12-09 11:01:35.374918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.271 [2024-12-09 11:01:35.374937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.271 [2024-12-09 11:01:35.374943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93368 len:8 PRP1 0x0 PRP2 0x0 00:18:42.271 [2024-12-09 11:01:35.374949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.271 [2024-12-09 11:01:35.374961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.271 [2024-12-09 11:01:35.374974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93376 len:8 PRP1 0x0 PRP2 0x0 00:18:42.271 [2024-12-09 11:01:35.374981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.374987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.271 [2024-12-09 11:01:35.374991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.271 [2024-12-09 11:01:35.374997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93384 len:8 PRP1 0x0 PRP2 0x0 00:18:42.271 [2024-12-09 11:01:35.375003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.271 [2024-12-09 11:01:35.375015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.272 [2024-12-09 11:01:35.375021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.272 [2024-12-09 11:01:35.375026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93392 len:8 PRP1 0x0 PRP2 0x0 00:18:42.272 [2024-12-09 11:01:35.375032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.272 [2024-12-09 11:01:35.375039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.272 [2024-12-09 11:01:35.375044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.272 [2024-12-09 11:01:35.375049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93400 len:8 PRP1 0x0 PRP2 0x0 00:18:42.272 [2024-12-09 11:01:35.375064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.272 [2024-12-09 11:01:35.375071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:42.272 [2024-12-09 11:01:35.375076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:42.272 [2024-12-09 11:01:35.375081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93408 len:8 PRP1 0x0 PRP2 0x0 00:18:42.272 [2024-12-09 11:01:35.375087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:42.272 [2024-12-09 11:01:35.375329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:42.272 [2024-12-09 11:01:35.375411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1843e50 (9): Bad file descriptor 00:18:42.272 [2024-12-09 11:01:35.375529] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:42.272 [2024-12-09 11:01:35.375551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1843e50 with addr=10.0.0.3, port=4420 00:18:42.272 [2024-12-09 11:01:35.375559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1843e50 is same with the state(6) to be set 00:18:42.272 [2024-12-09 11:01:35.375574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1843e50 (9): Bad file descriptor 00:18:42.272 [2024-12-09 11:01:35.375587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:18:42.272 [2024-12-09 11:01:35.375594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:18:42.272 [2024-12-09 11:01:35.375603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:42.272 [2024-12-09 11:01:35.375611] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:18:42.272 [2024-12-09 11:01:35.375620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:42.272 11:01:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:18:43.210 5774.50 IOPS, 22.56 MiB/s [2024-12-09T11:01:36.389Z] [2024-12-09 11:01:36.373822] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:43.210 [2024-12-09 11:01:36.373887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1843e50 with addr=10.0.0.3, port=4420 00:18:43.210 [2024-12-09 11:01:36.373898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1843e50 is same with the state(6) to be set 00:18:43.210 [2024-12-09 11:01:36.373918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1843e50 (9): Bad file descriptor 00:18:43.210 [2024-12-09 11:01:36.373933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:18:43.210 [2024-12-09 11:01:36.373942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:18:43.210 [2024-12-09 11:01:36.373952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:43.210 [2024-12-09 11:01:36.373961] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:18:43.210 [2024-12-09 11:01:36.373972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:44.409 3849.67 IOPS, 15.04 MiB/s [2024-12-09T11:01:37.588Z] [2024-12-09 11:01:37.372168] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:44.409 [2024-12-09 11:01:37.372225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1843e50 with addr=10.0.0.3, port=4420 00:18:44.409 [2024-12-09 11:01:37.372238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1843e50 is same with the state(6) to be set 00:18:44.409 [2024-12-09 11:01:37.372261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1843e50 (9): Bad file descriptor 00:18:44.409 [2024-12-09 11:01:37.372280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:18:44.409 [2024-12-09 11:01:37.372289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:18:44.410 [2024-12-09 11:01:37.372299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:44.410 [2024-12-09 11:01:37.372310] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:18:44.410 [2024-12-09 11:01:37.372322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:45.349 2887.25 IOPS, 11.28 MiB/s [2024-12-09T11:01:38.528Z] [2024-12-09 11:01:38.372901] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:45.349 [2024-12-09 11:01:38.372956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1843e50 with addr=10.0.0.3, port=4420 00:18:45.349 [2024-12-09 11:01:38.372968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1843e50 is same with the state(6) to be set 00:18:45.349 [2024-12-09 11:01:38.373151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1843e50 (9): Bad file descriptor 00:18:45.349 [2024-12-09 11:01:38.373362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:18:45.349 [2024-12-09 11:01:38.373382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:18:45.349 [2024-12-09 11:01:38.373403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:45.349 [2024-12-09 11:01:38.373413] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:18:45.349 [2024-12-09 11:01:38.373424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:45.349 11:01:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:45.609 [2024-12-09 11:01:38.598441] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:45.609 11:01:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82257 00:18:46.438 2309.80 IOPS, 9.02 MiB/s [2024-12-09T11:01:39.617Z] [2024-12-09 11:01:39.398920] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
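The connect() failures with errno = 111 in the retry loop above happen while the target's TCP listener is down; the loop recovers only after host/timeout.sh re-adds the listener with the nvmf_subsystem_add_listener call at @102 just above. A minimal shell sketch of that listener toggle, reconstructed solely from the rpc.py invocations recorded in this log (the matching nvmf_subsystem_remove_listener appears at @126 further down; rpc.py is used with its default target RPC socket here, exactly as in the trace):

  # Drop and restore the NVMe/TCP listener the initiator is connected to.
  # Removing it aborts in-flight I/O ("ABORTED - SQ DELETION" above) and
  # leaves the host reconnect/reset loop failing with errno = 111.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
  sleep 3      # the host keeps retrying roughly once per second meanwhile
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420

With the listening path restored, the pending controller reset succeeds and the rest of the 10-second run completes, producing the IOPS ramp and summary that follow.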
00:18:48.316 3676.83 IOPS, 14.36 MiB/s [2024-12-09T11:01:42.435Z] 4896.86 IOPS, 19.13 MiB/s [2024-12-09T11:01:43.373Z] 5800.75 IOPS, 22.66 MiB/s [2024-12-09T11:01:44.311Z] 6510.89 IOPS, 25.43 MiB/s [2024-12-09T11:01:44.311Z] 7072.60 IOPS, 27.63 MiB/s 00:18:51.132 Latency(us) 00:18:51.132 [2024-12-09T11:01:44.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.132 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:51.132 Verification LBA range: start 0x0 length 0x4000 00:18:51.132 NVMe0n1 : 10.01 7077.62 27.65 5517.28 0.00 10145.64 493.67 3018433.62 00:18:51.132 [2024-12-09T11:01:44.311Z] =================================================================================================================== 00:18:51.132 [2024-12-09T11:01:44.311Z] Total : 7077.62 27.65 5517.28 0.00 10145.64 0.00 3018433.62 00:18:51.132 { 00:18:51.132 "results": [ 00:18:51.132 { 00:18:51.132 "job": "NVMe0n1", 00:18:51.132 "core_mask": "0x4", 00:18:51.132 "workload": "verify", 00:18:51.132 "status": "finished", 00:18:51.132 "verify_range": { 00:18:51.132 "start": 0, 00:18:51.132 "length": 16384 00:18:51.132 }, 00:18:51.132 "queue_depth": 128, 00:18:51.132 "io_size": 4096, 00:18:51.132 "runtime": 10.00746, 00:18:51.132 "iops": 7077.620095408825, 00:18:51.132 "mibps": 27.646953497690724, 00:18:51.132 "io_failed": 55214, 00:18:51.132 "io_timeout": 0, 00:18:51.132 "avg_latency_us": 10145.637984957446, 00:18:51.132 "min_latency_us": 493.6663755458515, 00:18:51.132 "max_latency_us": 3018433.6209606985 00:18:51.132 } 00:18:51.132 ], 00:18:51.132 "core_count": 1 00:18:51.132 } 00:18:51.132 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82129 00:18:51.132 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82129 ']' 00:18:51.132 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82129 00:18:51.132 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:18:51.132 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.132 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82129 00:18:51.392 killing process with pid 82129 00:18:51.392 Received shutdown signal, test time was about 10.000000 seconds 00:18:51.392 00:18:51.392 Latency(us) 00:18:51.392 [2024-12-09T11:01:44.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.392 [2024-12-09T11:01:44.571Z] =================================================================================================================== 00:18:51.392 [2024-12-09T11:01:44.571Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:51.392 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:51.392 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:51.392 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82129' 00:18:51.392 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82129 00:18:51.392 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82129 00:18:51.652 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
randread -t 10 -f 00:18:51.652 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82371 00:18:51.652 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82371 /var/tmp/bdevperf.sock 00:18:51.652 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82371 ']' 00:18:51.652 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.652 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.652 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.652 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.652 11:01:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:51.652 [2024-12-09 11:01:44.691610] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:18:51.653 [2024-12-09 11:01:44.691677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82371 ] 00:18:51.912 [2024-12-09 11:01:44.843361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.912 [2024-12-09 11:01:44.906714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.912 [2024-12-09 11:01:44.980793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:52.484 11:01:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.484 11:01:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:18:52.484 11:01:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82387 00:18:52.484 11:01:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:52.484 11:01:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82371 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:52.742 11:01:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:53.002 NVMe0n1 00:18:53.002 11:01:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82429 00:18:53.002 11:01:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:53.002 11:01:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:18:53.002 Running I/O for 10 seconds... 
00:18:53.940 11:01:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:54.204 19812.00 IOPS, 77.39 MiB/s [2024-12-09T11:01:47.383Z] [2024-12-09 11:01:47.210424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 
11:01:47.210567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.204 [2024-12-09 11:01:47.210677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to 
be set 00:18:54.204 [2024-12-09 11:01:47.210681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.205 [2024-12-09 11:01:47.211089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf95e10 is same with the state(6) to be set 00:18:54.205 [2024-12-09 11:01:47.211190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211295] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:114376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.205 [2024-12-09 11:01:47.211666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.205 [2024-12-09 11:01:47.211695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.211703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.211711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71632 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.211727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.211765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.211774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.211783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.211791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.211799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.211806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.211814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.211830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.211840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.211847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.211865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.211872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.211881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:117568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.211894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.211903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.211919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.211929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.211937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.211946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:54.206 [2024-12-09 11:01:47.211954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.211963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.211970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.211987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.211994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:109048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 
11:01:47.212174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:123968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:93224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.206 [2024-12-09 11:01:47.212526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.206 [2024-12-09 11:01:47.212536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:56288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:54.207 [2024-12-09 11:01:47.212946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.212984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.212991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.213009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.213018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.213026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.213033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.213041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.213053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.213062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.213070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.213077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.213084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.213093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.213099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.213116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.213123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.213144] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.213151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.213160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:111808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.213166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.213174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.213187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.213196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.213202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.213211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.213219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.213228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.213243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.213252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.213260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.213282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:26008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.213290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.207 [2024-12-09 11:01:47.213323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.207 [2024-12-09 11:01:47.213333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:89944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:130864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213559] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:95200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:33352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 
nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:28248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:54.208 [2024-12-09 11:01:47.213801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.213820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd3920 is same with the state(6) to be set 00:18:54.208 [2024-12-09 11:01:47.213830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:54.208 [2024-12-09 11:01:47.213835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:54.208 [2024-12-09 11:01:47.213842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59776 len:8 PRP1 0x0 PRP2 0x0 00:18:54.208 [2024-12-09 11:01:47.213848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:54.208 [2024-12-09 11:01:47.214182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:54.208 [2024-12-09 11:01:47.214264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b66e50 (9): Bad file descriptor 00:18:54.208 [2024-12-09 11:01:47.214379] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:54.208 [2024-12-09 11:01:47.214401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b66e50 with addr=10.0.0.3, port=4420 00:18:54.208 [2024-12-09 11:01:47.214410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b66e50 is same with the state(6) to be set 00:18:54.208 [2024-12-09 11:01:47.214425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b66e50 (9): Bad file descriptor 00:18:54.208 [2024-12-09 11:01:47.214438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:54.208 [2024-12-09 11:01:47.214446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:54.208 [2024-12-09 11:01:47.214456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:18:54.208 [2024-12-09 11:01:47.214465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:18:54.208 [2024-12-09 11:01:47.214473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:54.208 11:01:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82429 00:18:56.126 10796.00 IOPS, 42.17 MiB/s [2024-12-09T11:01:49.305Z] 7197.33 IOPS, 28.11 MiB/s [2024-12-09T11:01:49.305Z] [2024-12-09 11:01:49.210860] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.126 [2024-12-09 11:01:49.210930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b66e50 with addr=10.0.0.3, port=4420 00:18:56.126 [2024-12-09 11:01:49.210944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b66e50 is same with the state(6) to be set 00:18:56.126 [2024-12-09 11:01:49.210971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b66e50 (9): Bad file descriptor 00:18:56.126 [2024-12-09 11:01:49.210991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:56.126 [2024-12-09 11:01:49.211000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:56.126 [2024-12-09 11:01:49.211010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:56.126 [2024-12-09 11:01:49.211020] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:18:56.126 [2024-12-09 11:01:49.211031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:58.000 5398.00 IOPS, 21.09 MiB/s [2024-12-09T11:01:51.438Z] 4318.40 IOPS, 16.87 MiB/s [2024-12-09T11:01:51.438Z] [2024-12-09 11:01:51.207395] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:58.259 [2024-12-09 11:01:51.207460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b66e50 with addr=10.0.0.3, port=4420 00:18:58.259 [2024-12-09 11:01:51.207474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b66e50 is same with the state(6) to be set 00:18:58.259 [2024-12-09 11:01:51.207500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b66e50 (9): Bad file descriptor 00:18:58.259 [2024-12-09 11:01:51.207518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:18:58.259 [2024-12-09 11:01:51.207527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:18:58.259 [2024-12-09 11:01:51.207537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:58.259 [2024-12-09 11:01:51.207547] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
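The retry loop above repeats roughly every two seconds: each attempt fails in uring_sock_create with errno 111 (connection refused) and bdev_nvme schedules another controller reset. The timeout test later verifies this by counting "reconnect delay bdev controller NVMe0" lines in the probe trace (the "Attaching 5 probes..." output shown further down). A minimal standalone sketch of that counting step follows; the trace path and the at-least-three-delays expectation are illustrative assumptions, not part of the captured run.

    #!/usr/bin/env bash
    # Hypothetical re-check of the reconnect-delay count seen later in this log.
    # The default path and the ">= 3" expectation are illustrative assumptions.
    trace=${1:-/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt}
    # The probe trace emits one line per delayed reconnect, e.g.
    #   3176.264672: reconnect delay bdev controller NVMe0
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    if ((delays <= 2)); then
        echo "only $delays delayed reconnect(s) recorded" >&2
        exit 1
    fi
    echo "observed $delays delayed reconnects"

In the run captured here the count is 3, so the test's "(( 3 <= 2 ))" check does not trip and the run proceeds to clean up.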
00:18:58.259 [2024-12-09 11:01:51.207556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:00.132 3598.67 IOPS, 14.06 MiB/s [2024-12-09T11:01:53.311Z] 3084.57 IOPS, 12.05 MiB/s [2024-12-09T11:01:53.311Z] [2024-12-09 11:01:53.203804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:00.132 [2024-12-09 11:01:53.203856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:00.132 [2024-12-09 11:01:53.203865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:00.132 [2024-12-09 11:01:53.203873] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:19:00.132 [2024-12-09 11:01:53.203883] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:19:01.071 2699.00 IOPS, 10.54 MiB/s 00:19:01.071 Latency(us) 00:19:01.071 [2024-12-09T11:01:54.250Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.071 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:01.071 NVMe0n1 : 8.10 2664.96 10.41 15.80 0.00 47786.56 6124.32 7033243.39 00:19:01.071 [2024-12-09T11:01:54.250Z] =================================================================================================================== 00:19:01.071 [2024-12-09T11:01:54.250Z] Total : 2664.96 10.41 15.80 0.00 47786.56 6124.32 7033243.39 00:19:01.071 { 00:19:01.071 "results": [ 00:19:01.071 { 00:19:01.071 "job": "NVMe0n1", 00:19:01.071 "core_mask": "0x4", 00:19:01.071 "workload": "randread", 00:19:01.071 "status": "finished", 00:19:01.071 "queue_depth": 128, 00:19:01.071 "io_size": 4096, 00:19:01.071 "runtime": 8.102189, 00:19:01.071 "iops": 2664.9588154509847, 00:19:01.071 "mibps": 10.409995372855409, 00:19:01.071 "io_failed": 128, 00:19:01.071 "io_timeout": 0, 00:19:01.071 "avg_latency_us": 47786.55532919974, 00:19:01.071 "min_latency_us": 6124.324890829695, 00:19:01.071 "max_latency_us": 7033243.388646288 00:19:01.071 } 00:19:01.071 ], 00:19:01.071 "core_count": 1 00:19:01.071 } 00:19:01.071 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:01.071 Attaching 5 probes... 
00:19:01.071 1179.727027: reset bdev controller NVMe0 00:19:01.071 1179.854497: reconnect bdev controller NVMe0 00:19:01.071 3176.264672: reconnect delay bdev controller NVMe0 00:19:01.071 3176.289096: reconnect bdev controller NVMe0 00:19:01.071 5172.812186: reconnect delay bdev controller NVMe0 00:19:01.071 5172.834008: reconnect bdev controller NVMe0 00:19:01.071 7169.338632: reconnect delay bdev controller NVMe0 00:19:01.071 7169.357875: reconnect bdev controller NVMe0 00:19:01.071 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:01.071 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:01.071 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82387 00:19:01.071 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:01.330 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82371 00:19:01.330 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82371 ']' 00:19:01.330 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82371 00:19:01.330 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:01.330 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.330 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82371 00:19:01.330 killing process with pid 82371 00:19:01.330 Received shutdown signal, test time was about 8.194506 seconds 00:19:01.330 00:19:01.330 Latency(us) 00:19:01.330 [2024-12-09T11:01:54.509Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.330 [2024-12-09T11:01:54.509Z] =================================================================================================================== 00:19:01.330 [2024-12-09T11:01:54.509Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.330 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:01.330 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:01.330 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82371' 00:19:01.330 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82371 00:19:01.330 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82371 00:19:01.588 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:01.848 11:01:54 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:01.848 rmmod nvme_tcp 00:19:01.848 rmmod nvme_fabrics 00:19:01.848 rmmod nvme_keyring 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81939 ']' 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81939 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 81939 ']' 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 81939 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81939 00:19:01.848 killing process with pid 81939 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81939' 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 81939 00:19:01.848 11:01:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 81939 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:02.117 11:01:55 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:02.117 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:02.377 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:02.377 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:02.377 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:02.377 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:02.377 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:02.377 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.377 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.377 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.377 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:19:02.377 00:19:02.377 real 0m46.311s 00:19:02.377 user 2m13.752s 00:19:02.377 sys 0m5.973s 00:19:02.377 ************************************ 00:19:02.377 END TEST nvmf_timeout 00:19:02.377 ************************************ 00:19:02.377 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.377 11:01:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:02.377 11:01:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:19:02.377 11:01:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:02.377 00:19:02.377 real 4m59.327s 00:19:02.377 user 12m49.000s 00:19:02.377 sys 1m5.812s 00:19:02.377 11:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.377 11:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.377 ************************************ 00:19:02.377 END TEST nvmf_host 00:19:02.377 ************************************ 00:19:02.636 11:01:55 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:19:02.636 11:01:55 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:19:02.636 00:19:02.636 real 12m9.547s 00:19:02.637 user 28m50.094s 00:19:02.637 sys 2m58.792s 00:19:02.637 11:01:55 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.637 11:01:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:02.637 ************************************ 00:19:02.637 END TEST nvmf_tcp 00:19:02.637 ************************************ 00:19:02.637 11:01:55 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:19:02.637 11:01:55 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:02.637 11:01:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:02.637 11:01:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.637 11:01:55 -- common/autotest_common.sh@10 -- # set +x 00:19:02.637 ************************************ 00:19:02.637 START TEST nvmf_dif 00:19:02.637 ************************************ 00:19:02.637 11:01:55 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:02.637 * Looking for test storage... 
00:19:02.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:02.637 11:01:55 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:02.637 11:01:55 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:19:02.637 11:01:55 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:02.897 11:01:55 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.897 11:01:55 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:19:02.897 11:01:55 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.897 11:01:55 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:02.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.897 --rc genhtml_branch_coverage=1 00:19:02.897 --rc genhtml_function_coverage=1 00:19:02.897 --rc genhtml_legend=1 00:19:02.897 --rc geninfo_all_blocks=1 00:19:02.897 --rc geninfo_unexecuted_blocks=1 00:19:02.897 00:19:02.897 ' 00:19:02.897 11:01:55 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:02.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.897 --rc genhtml_branch_coverage=1 00:19:02.897 --rc genhtml_function_coverage=1 00:19:02.897 --rc genhtml_legend=1 00:19:02.897 --rc geninfo_all_blocks=1 00:19:02.897 --rc geninfo_unexecuted_blocks=1 00:19:02.897 00:19:02.897 ' 00:19:02.898 11:01:55 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:19:02.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.898 --rc genhtml_branch_coverage=1 00:19:02.898 --rc genhtml_function_coverage=1 00:19:02.898 --rc genhtml_legend=1 00:19:02.898 --rc geninfo_all_blocks=1 00:19:02.898 --rc geninfo_unexecuted_blocks=1 00:19:02.898 00:19:02.898 ' 00:19:02.898 11:01:55 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:02.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.898 --rc genhtml_branch_coverage=1 00:19:02.898 --rc genhtml_function_coverage=1 00:19:02.898 --rc genhtml_legend=1 00:19:02.898 --rc geninfo_all_blocks=1 00:19:02.898 --rc geninfo_unexecuted_blocks=1 00:19:02.898 00:19:02.898 ' 00:19:02.898 11:01:55 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:02.898 11:01:55 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:19:02.898 11:01:55 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.898 11:01:55 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.898 11:01:55 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.898 11:01:55 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.898 11:01:55 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.898 11:01:55 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.898 11:01:55 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:02.898 11:01:55 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:02.898 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:02.898 11:01:55 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:02.898 11:01:55 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:02.898 11:01:55 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:02.898 11:01:55 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:02.898 11:01:55 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.898 11:01:55 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:02.898 11:01:55 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:02.898 11:01:55 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:02.898 Cannot find device "nvmf_init_br" 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:02.898 Cannot find device "nvmf_init_br2" 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:02.898 Cannot find device "nvmf_tgt_br" 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@164 -- # true 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:02.898 Cannot find device "nvmf_tgt_br2" 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@165 -- # true 00:19:02.898 11:01:55 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:02.898 Cannot find device "nvmf_init_br" 00:19:02.898 11:01:56 nvmf_dif -- nvmf/common.sh@166 -- # true 00:19:02.898 11:01:56 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:02.898 Cannot find device "nvmf_init_br2" 00:19:02.898 11:01:56 nvmf_dif -- nvmf/common.sh@167 -- # true 00:19:02.898 11:01:56 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:02.898 Cannot find device "nvmf_tgt_br" 00:19:02.898 11:01:56 nvmf_dif -- nvmf/common.sh@168 -- # true 00:19:02.898 11:01:56 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:02.898 Cannot find device "nvmf_tgt_br2" 00:19:02.898 11:01:56 nvmf_dif -- nvmf/common.sh@169 -- # true 00:19:02.898 11:01:56 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:03.158 Cannot find device "nvmf_br" 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@170 -- # true 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:19:03.158 Cannot find device "nvmf_init_if" 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@171 -- # true 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:03.158 Cannot find device "nvmf_init_if2" 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@172 -- # true 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:03.158 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@173 -- # true 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:03.158 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@174 -- # true 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:03.158 11:01:56 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:03.418 11:01:56 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:03.418 11:01:56 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:03.418 11:01:56 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:03.418 11:01:56 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:03.418 11:01:56 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:03.418 11:01:56 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:03.418 11:01:56 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:03.418 11:01:56 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:03.418 11:01:56 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:03.418 11:01:56 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:03.418 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:03.418 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.176 ms 00:19:03.418 00:19:03.418 --- 10.0.0.3 ping statistics --- 00:19:03.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.418 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:19:03.418 11:01:56 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:03.418 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:03.418 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.101 ms 00:19:03.418 00:19:03.418 --- 10.0.0.4 ping statistics --- 00:19:03.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.418 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:19:03.418 11:01:56 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:03.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:03.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:19:03.418 00:19:03.418 --- 10.0.0.1 ping statistics --- 00:19:03.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.418 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:03.418 11:01:56 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:03.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:03.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:19:03.418 00:19:03.418 --- 10.0.0.2 ping statistics --- 00:19:03.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.418 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:19:03.418 11:01:56 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.418 11:01:56 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:19:03.418 11:01:56 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:19:03.419 11:01:56 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:03.988 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:03.988 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:03.988 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:03.988 11:01:57 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.988 11:01:57 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:03.988 11:01:57 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:03.988 11:01:57 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.988 11:01:57 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:03.988 11:01:57 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:03.988 11:01:57 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:03.988 11:01:57 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:03.988 11:01:57 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:03.988 11:01:57 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.988 11:01:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:03.988 11:01:57 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82922 00:19:03.988 11:01:57 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:03.988 11:01:57 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82922 00:19:03.988 11:01:57 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 82922 ']' 00:19:03.988 11:01:57 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.988 11:01:57 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.988 11:01:57 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.988 11:01:57 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.988 11:01:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:03.988 [2024-12-09 11:01:57.135015] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:19:03.988 [2024-12-09 11:01:57.135089] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.248 [2024-12-09 11:01:57.290302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.248 [2024-12-09 11:01:57.332607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:04.248 [2024-12-09 11:01:57.332653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.248 [2024-12-09 11:01:57.332659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.248 [2024-12-09 11:01:57.332664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.248 [2024-12-09 11:01:57.332668] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:04.248 [2024-12-09 11:01:57.332967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.248 [2024-12-09 11:01:57.374372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:04.817 11:01:57 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.817 11:01:57 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:19:04.817 11:01:57 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:04.817 11:01:57 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:04.817 11:01:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:05.077 11:01:58 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.077 11:01:58 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:05.077 11:01:58 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:05.077 11:01:58 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.077 11:01:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:05.077 [2024-12-09 11:01:58.040329] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.077 11:01:58 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.077 11:01:58 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:05.077 11:01:58 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:05.077 11:01:58 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.077 11:01:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:05.077 ************************************ 00:19:05.077 START TEST fio_dif_1_default 00:19:05.077 ************************************ 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:05.077 bdev_null0 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:05.077 
11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:05.077 [2024-12-09 11:01:58.104283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:05.077 { 00:19:05.077 "params": { 00:19:05.077 "name": "Nvme$subsystem", 00:19:05.077 "trtype": "$TEST_TRANSPORT", 00:19:05.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.077 "adrfam": "ipv4", 00:19:05.077 "trsvcid": "$NVMF_PORT", 00:19:05.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.077 "hdgst": ${hdgst:-false}, 00:19:05.077 "ddgst": ${ddgst:-false} 00:19:05.077 }, 00:19:05.077 "method": "bdev_nvme_attach_controller" 00:19:05.077 } 00:19:05.077 EOF 00:19:05.077 )") 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:05.077 "params": { 00:19:05.077 "name": "Nvme0", 00:19:05.077 "trtype": "tcp", 00:19:05.077 "traddr": "10.0.0.3", 00:19:05.077 "adrfam": "ipv4", 00:19:05.077 "trsvcid": "4420", 00:19:05.077 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:05.077 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:05.077 "hdgst": false, 00:19:05.077 "ddgst": false 00:19:05.077 }, 00:19:05.077 "method": "bdev_nvme_attach_controller" 00:19:05.077 }' 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:05.077 11:01:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:05.337 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:05.337 fio-3.35 00:19:05.337 Starting 1 thread 00:19:17.549 00:19:17.549 filename0: (groupid=0, jobs=1): err= 0: pid=82989: Mon Dec 9 11:02:08 2024 00:19:17.549 read: IOPS=12.5k, BW=48.7MiB/s (51.0MB/s)(487MiB/10001msec) 00:19:17.549 slat (nsec): min=5220, max=54181, avg=5858.35, stdev=1279.49 00:19:17.549 clat (usec): min=259, max=2415, avg=305.11, stdev=31.24 00:19:17.549 lat (usec): min=264, max=2450, avg=310.97, stdev=31.47 00:19:17.549 clat percentiles (usec): 00:19:17.549 | 1.00th=[ 277], 5.00th=[ 
285], 10.00th=[ 289], 20.00th=[ 293], 00:19:17.549 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 302], 60.00th=[ 306], 00:19:17.549 | 70.00th=[ 310], 80.00th=[ 314], 90.00th=[ 322], 95.00th=[ 330], 00:19:17.549 | 99.00th=[ 359], 99.50th=[ 379], 99.90th=[ 570], 99.95th=[ 848], 00:19:17.549 | 99.99th=[ 1385] 00:19:17.549 bw ( KiB/s): min=48608, max=51744, per=100.00%, avg=49878.21, stdev=718.47, samples=19 00:19:17.549 iops : min=12152, max=12936, avg=12469.47, stdev=179.64, samples=19 00:19:17.549 lat (usec) : 500=99.83%, 750=0.09%, 1000=0.04% 00:19:17.549 lat (msec) : 2=0.04%, 4=0.01% 00:19:17.549 cpu : usr=84.68%, sys=13.64%, ctx=18, majf=0, minf=9 00:19:17.549 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:17.549 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.549 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.549 issued rwts: total=124560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.549 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:17.549 00:19:17.549 Run status group 0 (all jobs): 00:19:17.549 READ: bw=48.7MiB/s (51.0MB/s), 48.7MiB/s-48.7MiB/s (51.0MB/s-51.0MB/s), io=487MiB (510MB), run=10001-10001msec 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.549 00:19:17.549 real 0m11.018s 00:19:17.549 user 0m9.128s 00:19:17.549 sys 0m1.674s 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:17.549 ************************************ 00:19:17.549 END TEST fio_dif_1_default 00:19:17.549 ************************************ 00:19:17.549 11:02:09 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:17.549 11:02:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:17.549 11:02:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.549 11:02:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:17.549 ************************************ 00:19:17.549 START TEST fio_dif_1_multi_subsystems 00:19:17.549 ************************************ 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:17.549 bdev_null0 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:17.549 [2024-12-09 11:02:09.190204] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:17.549 bdev_null1 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:17.549 { 00:19:17.549 "params": { 00:19:17.549 "name": "Nvme$subsystem", 00:19:17.549 "trtype": "$TEST_TRANSPORT", 00:19:17.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:17.549 "adrfam": "ipv4", 00:19:17.549 "trsvcid": "$NVMF_PORT", 00:19:17.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:17.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:17.549 "hdgst": ${hdgst:-false}, 00:19:17.549 "ddgst": ${ddgst:-false} 00:19:17.549 }, 00:19:17.549 "method": "bdev_nvme_attach_controller" 00:19:17.549 } 00:19:17.549 EOF 00:19:17.549 )") 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 
00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:17.549 { 00:19:17.549 "params": { 00:19:17.549 "name": "Nvme$subsystem", 00:19:17.549 "trtype": "$TEST_TRANSPORT", 00:19:17.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:17.549 "adrfam": "ipv4", 00:19:17.549 "trsvcid": "$NVMF_PORT", 00:19:17.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:17.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:17.549 "hdgst": ${hdgst:-false}, 00:19:17.549 "ddgst": ${ddgst:-false} 00:19:17.549 }, 00:19:17.549 "method": "bdev_nvme_attach_controller" 00:19:17.549 } 00:19:17.549 EOF 00:19:17.549 )") 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:19:17.549 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:17.549 "params": { 00:19:17.549 "name": "Nvme0", 00:19:17.550 "trtype": "tcp", 00:19:17.550 "traddr": "10.0.0.3", 00:19:17.550 "adrfam": "ipv4", 00:19:17.550 "trsvcid": "4420", 00:19:17.550 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:17.550 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:17.550 "hdgst": false, 00:19:17.550 "ddgst": false 00:19:17.550 }, 00:19:17.550 "method": "bdev_nvme_attach_controller" 00:19:17.550 },{ 00:19:17.550 "params": { 00:19:17.550 "name": "Nvme1", 00:19:17.550 "trtype": "tcp", 00:19:17.550 "traddr": "10.0.0.3", 00:19:17.550 "adrfam": "ipv4", 00:19:17.550 "trsvcid": "4420", 00:19:17.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.550 "hdgst": false, 00:19:17.550 "ddgst": false 00:19:17.550 }, 00:19:17.550 "method": "bdev_nvme_attach_controller" 00:19:17.550 }' 00:19:17.550 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:17.550 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:17.550 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:17.550 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:17.550 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:17.550 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:17.550 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:17.550 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:17.550 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:17.550 11:02:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:17.550 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:17.550 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:17.550 fio-3.35 00:19:17.550 Starting 2 threads 00:19:27.537 00:19:27.537 filename0: (groupid=0, jobs=1): err= 0: pid=83154: Mon Dec 9 11:02:20 2024 00:19:27.537 read: IOPS=6508, BW=25.4MiB/s (26.7MB/s)(254MiB/10001msec) 00:19:27.537 slat (nsec): min=5265, max=70229, avg=13488.07, stdev=9128.39 00:19:27.537 clat (usec): min=309, max=1107, avg=572.05, stdev=31.06 00:19:27.537 lat (usec): min=315, max=1141, avg=585.54, stdev=34.16 00:19:27.537 clat percentiles (usec): 00:19:27.537 | 1.00th=[ 506], 5.00th=[ 523], 10.00th=[ 537], 20.00th=[ 545], 00:19:27.537 | 30.00th=[ 553], 40.00th=[ 562], 50.00th=[ 570], 60.00th=[ 578], 00:19:27.537 | 70.00th=[ 586], 80.00th=[ 594], 90.00th=[ 611], 95.00th=[ 619], 00:19:27.537 | 99.00th=[ 644], 99.50th=[ 652], 99.90th=[ 709], 99.95th=[ 750], 00:19:27.537 | 99.99th=[ 857] 00:19:27.537 bw ( KiB/s): min=25728, max=26336, per=50.11%, avg=26078.89, stdev=156.74, samples=19 00:19:27.537 iops : min= 6432, max= 
6584, avg=6519.68, stdev=39.15, samples=19 00:19:27.537 lat (usec) : 500=0.40%, 750=99.55%, 1000=0.04% 00:19:27.537 lat (msec) : 2=0.01% 00:19:27.537 cpu : usr=94.32%, sys=4.56%, ctx=33, majf=0, minf=0 00:19:27.537 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.537 issued rwts: total=65088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.537 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:27.537 filename1: (groupid=0, jobs=1): err= 0: pid=83155: Mon Dec 9 11:02:20 2024 00:19:27.537 read: IOPS=6502, BW=25.4MiB/s (26.6MB/s)(254MiB/10001msec) 00:19:27.537 slat (nsec): min=5220, max=83764, avg=9871.99, stdev=5525.26 00:19:27.537 clat (usec): min=503, max=1075, avg=584.87, stdev=34.71 00:19:27.537 lat (usec): min=509, max=1107, avg=594.74, stdev=36.34 00:19:27.537 clat percentiles (usec): 00:19:27.537 | 1.00th=[ 529], 5.00th=[ 537], 10.00th=[ 545], 20.00th=[ 553], 00:19:27.537 | 30.00th=[ 562], 40.00th=[ 570], 50.00th=[ 578], 60.00th=[ 594], 00:19:27.537 | 70.00th=[ 603], 80.00th=[ 619], 90.00th=[ 627], 95.00th=[ 644], 00:19:27.537 | 99.00th=[ 668], 99.50th=[ 676], 99.90th=[ 848], 99.95th=[ 922], 00:19:27.537 | 99.99th=[ 996] 00:19:27.537 bw ( KiB/s): min=25440, max=26336, per=50.06%, avg=26053.63, stdev=211.99, samples=19 00:19:27.537 iops : min= 6360, max= 6584, avg=6513.37, stdev=52.97, samples=19 00:19:27.537 lat (usec) : 750=99.85%, 1000=0.14% 00:19:27.537 lat (msec) : 2=0.01% 00:19:27.537 cpu : usr=93.41%, sys=5.65%, ctx=19, majf=0, minf=0 00:19:27.537 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.537 issued rwts: total=65028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.537 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:27.537 00:19:27.537 Run status group 0 (all jobs): 00:19:27.537 READ: bw=50.8MiB/s (53.3MB/s), 25.4MiB/s-25.4MiB/s (26.6MB/s-26.7MB/s), io=508MiB (533MB), run=10001-10001msec 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 
-- # set +x 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:27.537 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.538 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:27.538 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.538 00:19:27.538 real 0m11.196s 00:19:27.538 user 0m19.603s 00:19:27.538 sys 0m1.342s 00:19:27.538 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.538 11:02:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:27.538 ************************************ 00:19:27.538 END TEST fio_dif_1_multi_subsystems 00:19:27.538 ************************************ 00:19:27.538 11:02:20 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:27.538 11:02:20 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:27.538 11:02:20 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.538 11:02:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:27.538 ************************************ 00:19:27.538 START TEST fio_dif_rand_params 00:19:27.538 ************************************ 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.538 bdev_null0 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:27.538 [2024-12-09 11:02:20.455709] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:27.538 { 00:19:27.538 "params": { 00:19:27.538 "name": "Nvme$subsystem", 00:19:27.538 "trtype": "$TEST_TRANSPORT", 00:19:27.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:27.538 "adrfam": "ipv4", 00:19:27.538 "trsvcid": "$NVMF_PORT", 00:19:27.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:27.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:27.538 "hdgst": ${hdgst:-false}, 00:19:27.538 "ddgst": ${ddgst:-false} 00:19:27.538 }, 00:19:27.538 "method": "bdev_nvme_attach_controller" 00:19:27.538 } 00:19:27.538 EOF 00:19:27.538 )") 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
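Before launching fio, the wrapper checks whether the plugin was built with a sanitizer: it runs ldd on build/fio/spdk_bdev, greps for libasan (and later libclang_rt.asan), and keeps the third ldd column so the sanitizer runtime can be preloaded ahead of the plugin. In this run both lookups come back empty, so only the plugin itself ends up in LD_PRELOAD. A condensed sketch of that dance with the paths from this log:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# pick up the ASan runtime if the plugin links against it (empty here)
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
# preload the sanitizer (if any) plus the SPDK ioengine, then run fio;
# /dev/fd/62 carries the generated JSON config, /dev/fd/61 the fio job file
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61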
00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:27.538 "params": { 00:19:27.538 "name": "Nvme0", 00:19:27.538 "trtype": "tcp", 00:19:27.538 "traddr": "10.0.0.3", 00:19:27.538 "adrfam": "ipv4", 00:19:27.538 "trsvcid": "4420", 00:19:27.538 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:27.538 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:27.538 "hdgst": false, 00:19:27.538 "ddgst": false 00:19:27.538 }, 00:19:27.538 "method": "bdev_nvme_attach_controller" 00:19:27.538 }' 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:27.538 11:02:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:27.538 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:27.538 ... 
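The job header above reflects the parameters set at the top of this test (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5). The actual job file is produced by gen_fio_conf and handed to fio on /dev/fd/61; a rough hand-written equivalent is sketched below, assuming thread mode, a time-based run, and a bdev named Nvme0n1 exposed by the attach-controller config (all assumptions, since the generated file is not shown in the log):

cat > dif_rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF
# the JSON bdev config is still supplied on the command line, e.g.
# fio --spdk_json_conf /dev/fd/62 dif_rand_params.fio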
00:19:27.538 fio-3.35 00:19:27.538 Starting 3 threads 00:19:34.113 00:19:34.113 filename0: (groupid=0, jobs=1): err= 0: pid=83317: Mon Dec 9 11:02:26 2024 00:19:34.113 read: IOPS=331, BW=41.5MiB/s (43.5MB/s)(208MiB/5008msec) 00:19:34.113 slat (nsec): min=5456, max=55323, avg=14487.58, stdev=10125.11 00:19:34.113 clat (usec): min=3402, max=9272, avg=9003.07, stdev=358.99 00:19:34.113 lat (usec): min=3423, max=9303, avg=9017.56, stdev=359.58 00:19:34.113 clat percentiles (usec): 00:19:34.113 | 1.00th=[ 8717], 5.00th=[ 8848], 10.00th=[ 8848], 20.00th=[ 8979], 00:19:34.113 | 30.00th=[ 8979], 40.00th=[ 8979], 50.00th=[ 8979], 60.00th=[ 9110], 00:19:34.113 | 70.00th=[ 9110], 80.00th=[ 9110], 90.00th=[ 9110], 95.00th=[ 9110], 00:19:34.113 | 99.00th=[ 9241], 99.50th=[ 9241], 99.90th=[ 9241], 99.95th=[ 9241], 00:19:34.113 | 99.99th=[ 9241] 00:19:34.113 bw ( KiB/s): min=41472, max=43776, per=33.41%, avg=42470.40, stdev=632.27, samples=10 00:19:34.113 iops : min= 324, max= 342, avg=331.80, stdev= 4.94, samples=10 00:19:34.113 lat (msec) : 4=0.36%, 10=99.64% 00:19:34.113 cpu : usr=96.27%, sys=3.32%, ctx=13, majf=0, minf=0 00:19:34.113 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:34.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.113 issued rwts: total=1662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.113 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:34.113 filename0: (groupid=0, jobs=1): err= 0: pid=83318: Mon Dec 9 11:02:26 2024 00:19:34.113 read: IOPS=330, BW=41.3MiB/s (43.4MB/s)(207MiB/5006msec) 00:19:34.113 slat (nsec): min=5335, max=62647, avg=12345.40, stdev=8864.50 00:19:34.113 clat (usec): min=6139, max=9900, avg=9038.52, stdev=156.06 00:19:34.113 lat (usec): min=6146, max=9934, avg=9050.86, stdev=156.96 00:19:34.113 clat percentiles (usec): 00:19:34.113 | 1.00th=[ 8717], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 8979], 00:19:34.113 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9110], 60.00th=[ 9110], 00:19:34.113 | 70.00th=[ 9110], 80.00th=[ 9110], 90.00th=[ 9110], 95.00th=[ 9110], 00:19:34.113 | 99.00th=[ 9241], 99.50th=[ 9241], 99.90th=[ 9896], 99.95th=[ 9896], 00:19:34.113 | 99.99th=[ 9896] 00:19:34.113 bw ( KiB/s): min=42240, max=43008, per=33.36%, avg=42410.67, stdev=338.66, samples=9 00:19:34.113 iops : min= 330, max= 336, avg=331.33, stdev= 2.65, samples=9 00:19:34.113 lat (msec) : 10=100.00% 00:19:34.113 cpu : usr=95.94%, sys=3.62%, ctx=15, majf=0, minf=0 00:19:34.113 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:34.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.113 issued rwts: total=1656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.113 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:34.113 filename0: (groupid=0, jobs=1): err= 0: pid=83319: Mon Dec 9 11:02:26 2024 00:19:34.113 read: IOPS=330, BW=41.3MiB/s (43.4MB/s)(207MiB/5006msec) 00:19:34.113 slat (nsec): min=5757, max=70110, avg=14019.56, stdev=7697.96 00:19:34.113 clat (usec): min=6144, max=10210, avg=9034.15, stdev=170.65 00:19:34.113 lat (usec): min=6154, max=10280, avg=9048.17, stdev=171.95 00:19:34.113 clat percentiles (usec): 00:19:34.113 | 1.00th=[ 8717], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 8979], 00:19:34.113 | 30.00th=[ 8979], 40.00th=[ 8979], 50.00th=[ 9110], 
60.00th=[ 9110], 00:19:34.113 | 70.00th=[ 9110], 80.00th=[ 9110], 90.00th=[ 9110], 95.00th=[ 9110], 00:19:34.113 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[10159], 99.95th=[10159], 00:19:34.113 | 99.99th=[10159] 00:19:34.113 bw ( KiB/s): min=42240, max=43008, per=33.36%, avg=42410.67, stdev=338.66, samples=9 00:19:34.113 iops : min= 330, max= 336, avg=331.33, stdev= 2.65, samples=9 00:19:34.113 lat (msec) : 10=99.64%, 20=0.36% 00:19:34.113 cpu : usr=93.85%, sys=5.73%, ctx=4, majf=0, minf=0 00:19:34.113 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:34.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.113 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.113 issued rwts: total=1656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.113 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:34.113 00:19:34.113 Run status group 0 (all jobs): 00:19:34.113 READ: bw=124MiB/s (130MB/s), 41.3MiB/s-41.5MiB/s (43.4MB/s-43.5MB/s), io=622MiB (652MB), run=5006-5008msec 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:34.113 11:02:26 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.113 bdev_null0 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.113 [2024-12-09 11:02:26.550712] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.113 bdev_null1 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.113 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.114 bdev_null2 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:34.114 { 00:19:34.114 "params": { 00:19:34.114 "name": "Nvme$subsystem", 00:19:34.114 "trtype": "$TEST_TRANSPORT", 00:19:34.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.114 "adrfam": "ipv4", 00:19:34.114 "trsvcid": "$NVMF_PORT", 00:19:34.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.114 "hdgst": ${hdgst:-false}, 00:19:34.114 "ddgst": ${ddgst:-false} 00:19:34.114 }, 00:19:34.114 "method": "bdev_nvme_attach_controller" 00:19:34.114 } 00:19:34.114 EOF 00:19:34.114 )") 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:34.114 { 00:19:34.114 "params": { 00:19:34.114 "name": "Nvme$subsystem", 00:19:34.114 "trtype": "$TEST_TRANSPORT", 00:19:34.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.114 "adrfam": "ipv4", 00:19:34.114 "trsvcid": "$NVMF_PORT", 00:19:34.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.114 "hdgst": ${hdgst:-false}, 00:19:34.114 "ddgst": ${ddgst:-false} 00:19:34.114 }, 00:19:34.114 "method": "bdev_nvme_attach_controller" 00:19:34.114 } 00:19:34.114 EOF 00:19:34.114 )") 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:34.114 11:02:26 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:34.114 { 00:19:34.114 "params": { 00:19:34.114 "name": "Nvme$subsystem", 00:19:34.114 "trtype": "$TEST_TRANSPORT", 00:19:34.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.114 "adrfam": "ipv4", 00:19:34.114 "trsvcid": "$NVMF_PORT", 00:19:34.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.114 "hdgst": ${hdgst:-false}, 00:19:34.114 "ddgst": ${ddgst:-false} 00:19:34.114 }, 00:19:34.114 "method": "bdev_nvme_attach_controller" 00:19:34.114 } 00:19:34.114 EOF 00:19:34.114 )") 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:34.114 "params": { 00:19:34.114 "name": "Nvme0", 00:19:34.114 "trtype": "tcp", 00:19:34.114 "traddr": "10.0.0.3", 00:19:34.114 "adrfam": "ipv4", 00:19:34.114 "trsvcid": "4420", 00:19:34.114 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:34.114 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:34.114 "hdgst": false, 00:19:34.114 "ddgst": false 00:19:34.114 }, 00:19:34.114 "method": "bdev_nvme_attach_controller" 00:19:34.114 },{ 00:19:34.114 "params": { 00:19:34.114 "name": "Nvme1", 00:19:34.114 "trtype": "tcp", 00:19:34.114 "traddr": "10.0.0.3", 00:19:34.114 "adrfam": "ipv4", 00:19:34.114 "trsvcid": "4420", 00:19:34.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.114 "hdgst": false, 00:19:34.114 "ddgst": false 00:19:34.114 }, 00:19:34.114 "method": "bdev_nvme_attach_controller" 00:19:34.114 },{ 00:19:34.114 "params": { 00:19:34.114 "name": "Nvme2", 00:19:34.114 "trtype": "tcp", 00:19:34.114 "traddr": "10.0.0.3", 00:19:34.114 "adrfam": "ipv4", 00:19:34.114 "trsvcid": "4420", 00:19:34.114 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:34.114 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:34.114 "hdgst": false, 00:19:34.114 "ddgst": false 00:19:34.114 }, 00:19:34.114 "method": "bdev_nvme_attach_controller" 00:19:34.114 }' 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:34.114 11:02:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:34.114 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:34.114 ... 00:19:34.114 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:34.114 ... 00:19:34.115 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:34.115 ... 00:19:34.115 fio-3.35 00:19:34.115 Starting 24 threads 00:19:46.335 00:19:46.335 filename0: (groupid=0, jobs=1): err= 0: pid=83415: Mon Dec 9 11:02:37 2024 00:19:46.335 read: IOPS=299, BW=1200KiB/s (1228kB/s)(11.7MiB/10010msec) 00:19:46.335 slat (usec): min=3, max=8038, avg=47.39, stdev=464.03 00:19:46.335 clat (usec): min=16758, max=99663, avg=53158.47, stdev=14671.72 00:19:46.335 lat (usec): min=16767, max=99700, avg=53205.86, stdev=14682.47 00:19:46.335 clat percentiles (msec): 00:19:46.335 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 40], 00:19:46.335 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 58], 00:19:46.335 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 72], 95.00th=[ 84], 00:19:46.335 | 99.00th=[ 90], 99.50th=[ 95], 99.90th=[ 100], 99.95th=[ 101], 00:19:46.335 | 99.99th=[ 101] 00:19:46.335 bw ( KiB/s): min= 912, max= 1536, per=4.21%, avg=1195.79, stdev=146.87, samples=19 00:19:46.335 iops : min= 228, max= 384, avg=298.95, stdev=36.72, samples=19 00:19:46.335 lat (msec) : 20=0.10%, 50=46.94%, 100=52.96% 00:19:46.335 cpu : usr=35.48%, sys=0.87%, ctx=1064, majf=0, minf=9 00:19:46.335 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:46.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.335 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.335 issued rwts: total=3002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.335 filename0: (groupid=0, jobs=1): err= 0: pid=83416: Mon Dec 9 11:02:37 2024 00:19:46.335 read: IOPS=285, BW=1144KiB/s (1171kB/s)(11.2MiB/10035msec) 00:19:46.335 slat (usec): min=2, max=8054, avg=45.02, stdev=385.23 00:19:46.335 clat (msec): min=15, max=107, avg=55.69, stdev=15.45 00:19:46.335 lat (msec): min=15, max=107, avg=55.73, stdev=15.45 00:19:46.335 clat percentiles (msec): 00:19:46.335 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 47], 00:19:46.335 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 56], 60.00th=[ 59], 00:19:46.335 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 78], 95.00th=[ 85], 00:19:46.335 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 108], 00:19:46.335 | 99.99th=[ 108] 00:19:46.335 bw ( KiB/s): min= 832, max= 1666, per=4.02%, avg=1140.50, stdev=183.29, samples=20 00:19:46.335 iops : min= 208, max= 416, avg=285.10, stdev=45.75, samples=20 00:19:46.335 lat (msec) : 20=0.70%, 50=40.29%, 100=58.80%, 250=0.21% 00:19:46.335 cpu : usr=37.31%, sys=0.58%, ctx=1037, majf=0, minf=9 00:19:46.335 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=77.8%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:46.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.335 complete : 0=0.0%, 4=89.1%, 8=9.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.335 issued rwts: total=2869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:19:46.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.335 filename0: (groupid=0, jobs=1): err= 0: pid=83417: Mon Dec 9 11:02:37 2024 00:19:46.335 read: IOPS=301, BW=1207KiB/s (1236kB/s)(11.8MiB/10007msec) 00:19:46.335 slat (usec): min=3, max=8042, avg=43.60, stdev=321.79 00:19:46.335 clat (msec): min=9, max=103, avg=52.85, stdev=15.38 00:19:46.335 lat (msec): min=9, max=103, avg=52.89, stdev=15.38 00:19:46.335 clat percentiles (msec): 00:19:46.335 | 1.00th=[ 20], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 40], 00:19:46.335 | 30.00th=[ 45], 40.00th=[ 48], 50.00th=[ 53], 60.00th=[ 56], 00:19:46.335 | 70.00th=[ 59], 80.00th=[ 63], 90.00th=[ 75], 95.00th=[ 83], 00:19:46.335 | 99.00th=[ 92], 99.50th=[ 94], 99.90th=[ 102], 99.95th=[ 102], 00:19:46.335 | 99.99th=[ 105] 00:19:46.335 bw ( KiB/s): min= 896, max= 1408, per=4.20%, avg=1192.84, stdev=156.79, samples=19 00:19:46.335 iops : min= 224, max= 352, avg=298.21, stdev=39.20, samples=19 00:19:46.335 lat (msec) : 10=0.43%, 20=1.09%, 50=42.80%, 100=55.55%, 250=0.13% 00:19:46.335 cpu : usr=44.79%, sys=0.97%, ctx=1495, majf=0, minf=9 00:19:46.335 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=79.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:46.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.335 complete : 0=0.0%, 4=88.2%, 8=10.8%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.335 issued rwts: total=3019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.335 filename0: (groupid=0, jobs=1): err= 0: pid=83418: Mon Dec 9 11:02:37 2024 00:19:46.335 read: IOPS=300, BW=1200KiB/s (1229kB/s)(11.8MiB/10033msec) 00:19:46.335 slat (usec): min=5, max=8028, avg=28.51, stdev=258.33 00:19:46.335 clat (msec): min=8, max=103, avg=53.15, stdev=16.86 00:19:46.335 lat (msec): min=8, max=103, avg=53.18, stdev=16.86 00:19:46.335 clat percentiles (msec): 00:19:46.335 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 35], 20.00th=[ 39], 00:19:46.335 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 53], 60.00th=[ 57], 00:19:46.335 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 78], 95.00th=[ 85], 00:19:46.335 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 96], 99.95th=[ 96], 00:19:46.335 | 99.99th=[ 105] 00:19:46.335 bw ( KiB/s): min= 832, max= 2198, per=4.22%, avg=1198.30, stdev=270.58, samples=20 00:19:46.335 iops : min= 208, max= 549, avg=299.55, stdev=67.55, samples=20 00:19:46.335 lat (msec) : 10=1.00%, 20=1.66%, 50=42.46%, 100=54.85%, 250=0.03% 00:19:46.335 cpu : usr=40.46%, sys=0.84%, ctx=1243, majf=0, minf=9 00:19:46.335 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.0%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:46.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.335 complete : 0=0.0%, 4=88.2%, 8=11.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.335 issued rwts: total=3010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.335 filename0: (groupid=0, jobs=1): err= 0: pid=83419: Mon Dec 9 11:02:37 2024 00:19:46.335 read: IOPS=307, BW=1230KiB/s (1260kB/s)(12.1MiB/10029msec) 00:19:46.335 slat (usec): min=5, max=8043, avg=47.38, stdev=332.65 00:19:46.335 clat (msec): min=10, max=114, avg=51.77, stdev=16.13 00:19:46.335 lat (msec): min=10, max=114, avg=51.82, stdev=16.13 00:19:46.335 clat percentiles (msec): 00:19:46.335 | 1.00th=[ 16], 5.00th=[ 27], 10.00th=[ 33], 20.00th=[ 39], 00:19:46.335 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 55], 00:19:46.335 | 70.00th=[ 58], 80.00th=[ 63], 
90.00th=[ 74], 95.00th=[ 83], 00:19:46.335 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 104], 99.95th=[ 112], 00:19:46.335 | 99.99th=[ 115] 00:19:46.335 bw ( KiB/s): min= 832, max= 2036, per=4.32%, avg=1227.80, stdev=238.39, samples=20 00:19:46.335 iops : min= 208, max= 509, avg=306.95, stdev=59.60, samples=20 00:19:46.335 lat (msec) : 20=2.24%, 50=43.95%, 100=53.68%, 250=0.13% 00:19:46.335 cpu : usr=44.81%, sys=0.95%, ctx=1514, majf=0, minf=9 00:19:46.335 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.7%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:46.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.335 complete : 0=0.0%, 4=87.5%, 8=12.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.335 issued rwts: total=3085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.335 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.335 filename0: (groupid=0, jobs=1): err= 0: pid=83420: Mon Dec 9 11:02:37 2024 00:19:46.335 read: IOPS=292, BW=1169KiB/s (1197kB/s)(11.4MiB/10027msec) 00:19:46.336 slat (usec): min=3, max=11052, avg=44.30, stdev=422.68 00:19:46.336 clat (msec): min=19, max=116, avg=54.52, stdev=16.00 00:19:46.336 lat (msec): min=19, max=116, avg=54.57, stdev=15.99 00:19:46.336 clat percentiles (msec): 00:19:46.336 | 1.00th=[ 23], 5.00th=[ 30], 10.00th=[ 36], 20.00th=[ 42], 00:19:46.336 | 30.00th=[ 48], 40.00th=[ 49], 50.00th=[ 54], 60.00th=[ 58], 00:19:46.336 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 81], 95.00th=[ 85], 00:19:46.336 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 109], 00:19:46.336 | 99.99th=[ 116] 00:19:46.336 bw ( KiB/s): min= 784, max= 1747, per=4.12%, avg=1168.05, stdev=204.45, samples=20 00:19:46.336 iops : min= 196, max= 436, avg=291.95, stdev=50.99, samples=20 00:19:46.336 lat (msec) : 20=0.55%, 50=43.67%, 100=55.54%, 250=0.24% 00:19:46.336 cpu : usr=36.91%, sys=0.54%, ctx=1087, majf=0, minf=9 00:19:46.336 IO depths : 1=0.1%, 2=0.3%, 4=1.6%, 8=81.0%, 16=17.1%, 32=0.0%, >=64=0.0% 00:19:46.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.336 complete : 0=0.0%, 4=88.4%, 8=11.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.336 issued rwts: total=2931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.336 filename0: (groupid=0, jobs=1): err= 0: pid=83421: Mon Dec 9 11:02:37 2024 00:19:46.336 read: IOPS=291, BW=1165KiB/s (1193kB/s)(11.4MiB/10018msec) 00:19:46.336 slat (usec): min=4, max=8046, avg=27.97, stdev=274.86 00:19:46.336 clat (msec): min=11, max=101, avg=54.77, stdev=15.72 00:19:46.336 lat (msec): min=11, max=101, avg=54.80, stdev=15.72 00:19:46.336 clat percentiles (msec): 00:19:46.336 | 1.00th=[ 23], 5.00th=[ 29], 10.00th=[ 36], 20.00th=[ 44], 00:19:46.336 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 57], 00:19:46.336 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 81], 95.00th=[ 84], 00:19:46.336 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 99], 99.95th=[ 100], 00:19:46.336 | 99.99th=[ 103] 00:19:46.336 bw ( KiB/s): min= 864, max= 1744, per=4.11%, avg=1165.05, stdev=196.47, samples=20 00:19:46.336 iops : min= 216, max= 436, avg=291.25, stdev=49.11, samples=20 00:19:46.336 lat (msec) : 20=0.62%, 50=40.83%, 100=58.52%, 250=0.03% 00:19:46.336 cpu : usr=38.24%, sys=0.84%, ctx=1121, majf=0, minf=9 00:19:46.336 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=77.4%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:46.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.336 complete : 0=0.0%, 4=89.1%, 8=9.8%, 16=1.1%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.336 issued rwts: total=2917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.336 filename0: (groupid=0, jobs=1): err= 0: pid=83422: Mon Dec 9 11:02:37 2024 00:19:46.336 read: IOPS=294, BW=1179KiB/s (1207kB/s)(11.6MiB/10041msec) 00:19:46.336 slat (usec): min=5, max=9031, avg=20.71, stdev=195.29 00:19:46.336 clat (msec): min=6, max=108, avg=54.14, stdev=17.28 00:19:46.336 lat (msec): min=6, max=108, avg=54.16, stdev=17.28 00:19:46.336 clat percentiles (msec): 00:19:46.336 | 1.00th=[ 9], 5.00th=[ 25], 10.00th=[ 35], 20.00th=[ 41], 00:19:46.336 | 30.00th=[ 48], 40.00th=[ 49], 50.00th=[ 54], 60.00th=[ 58], 00:19:46.336 | 70.00th=[ 61], 80.00th=[ 69], 90.00th=[ 79], 95.00th=[ 85], 00:19:46.336 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 109], 00:19:46.336 | 99.99th=[ 109] 00:19:46.336 bw ( KiB/s): min= 784, max= 2222, per=4.14%, avg=1175.85, stdev=286.29, samples=20 00:19:46.336 iops : min= 196, max= 555, avg=293.90, stdev=71.45, samples=20 00:19:46.336 lat (msec) : 10=1.08%, 20=2.16%, 50=40.51%, 100=56.01%, 250=0.24% 00:19:46.336 cpu : usr=36.76%, sys=0.88%, ctx=1167, majf=0, minf=9 00:19:46.336 IO depths : 1=0.1%, 2=0.5%, 4=1.7%, 8=80.6%, 16=17.1%, 32=0.0%, >=64=0.0% 00:19:46.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.336 complete : 0=0.0%, 4=88.5%, 8=11.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.336 issued rwts: total=2960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.336 filename1: (groupid=0, jobs=1): err= 0: pid=83423: Mon Dec 9 11:02:37 2024 00:19:46.336 read: IOPS=284, BW=1140KiB/s (1167kB/s)(11.2MiB/10033msec) 00:19:46.336 slat (usec): min=5, max=8038, avg=34.10, stdev=289.23 00:19:46.336 clat (msec): min=6, max=111, avg=55.92, stdev=17.47 00:19:46.336 lat (msec): min=6, max=111, avg=55.96, stdev=17.48 00:19:46.336 clat percentiles (msec): 00:19:46.336 | 1.00th=[ 8], 5.00th=[ 22], 10.00th=[ 36], 20.00th=[ 47], 00:19:46.336 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 60], 00:19:46.336 | 70.00th=[ 61], 80.00th=[ 71], 90.00th=[ 82], 95.00th=[ 85], 00:19:46.336 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 111], 99.95th=[ 111], 00:19:46.336 | 99.99th=[ 111] 00:19:46.336 bw ( KiB/s): min= 808, max= 2158, per=4.01%, avg=1138.40, stdev=276.88, samples=20 00:19:46.336 iops : min= 202, max= 539, avg=284.55, stdev=69.13, samples=20 00:19:46.336 lat (msec) : 10=1.12%, 20=2.34%, 50=35.68%, 100=60.62%, 250=0.24% 00:19:46.336 cpu : usr=38.22%, sys=0.61%, ctx=1350, majf=0, minf=9 00:19:46.336 IO depths : 1=0.1%, 2=1.7%, 4=6.9%, 8=75.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:46.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.336 complete : 0=0.0%, 4=89.9%, 8=8.6%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.336 issued rwts: total=2859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.336 filename1: (groupid=0, jobs=1): err= 0: pid=83424: Mon Dec 9 11:02:37 2024 00:19:46.336 read: IOPS=302, BW=1208KiB/s (1237kB/s)(11.8MiB/10008msec) 00:19:46.336 slat (usec): min=3, max=8042, avg=38.71, stdev=325.53 00:19:46.336 clat (msec): min=14, max=107, avg=52.78, stdev=15.06 00:19:46.336 lat (msec): min=14, max=107, avg=52.82, stdev=15.06 00:19:46.336 clat percentiles (msec): 00:19:46.336 | 1.00th=[ 22], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 39], 00:19:46.336 | 
30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 56], 00:19:46.336 | 70.00th=[ 60], 80.00th=[ 63], 90.00th=[ 73], 95.00th=[ 84], 00:19:46.336 | 99.00th=[ 91], 99.50th=[ 95], 99.90th=[ 96], 99.95th=[ 96], 00:19:46.336 | 99.99th=[ 108] 00:19:46.336 bw ( KiB/s): min= 896, max= 1376, per=4.23%, avg=1201.26, stdev=133.53, samples=19 00:19:46.336 iops : min= 224, max= 344, avg=300.32, stdev=33.38, samples=19 00:19:46.336 lat (msec) : 20=0.20%, 50=47.34%, 100=52.43%, 250=0.03% 00:19:46.336 cpu : usr=39.07%, sys=0.60%, ctx=1177, majf=0, minf=9 00:19:46.336 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:46.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.336 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.336 issued rwts: total=3023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.336 filename1: (groupid=0, jobs=1): err= 0: pid=83425: Mon Dec 9 11:02:37 2024 00:19:46.336 read: IOPS=300, BW=1201KiB/s (1230kB/s)(11.8MiB/10020msec) 00:19:46.336 slat (usec): min=3, max=8032, avg=33.55, stdev=326.74 00:19:46.336 clat (msec): min=14, max=108, avg=53.11, stdev=15.70 00:19:46.336 lat (msec): min=14, max=108, avg=53.15, stdev=15.71 00:19:46.336 clat percentiles (msec): 00:19:46.336 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 40], 00:19:46.336 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 57], 00:19:46.336 | 70.00th=[ 59], 80.00th=[ 63], 90.00th=[ 75], 95.00th=[ 84], 00:19:46.336 | 99.00th=[ 94], 99.50th=[ 108], 99.90th=[ 109], 99.95th=[ 109], 00:19:46.336 | 99.99th=[ 109] 00:19:46.336 bw ( KiB/s): min= 768, max= 1768, per=4.23%, avg=1199.20, stdev=188.31, samples=20 00:19:46.336 iops : min= 192, max= 442, avg=299.80, stdev=47.08, samples=20 00:19:46.336 lat (msec) : 20=0.53%, 50=46.89%, 100=52.01%, 250=0.56% 00:19:46.336 cpu : usr=39.17%, sys=0.97%, ctx=1227, majf=0, minf=9 00:19:46.336 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=80.3%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:46.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.336 complete : 0=0.0%, 4=88.2%, 8=11.2%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.336 issued rwts: total=3009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.336 filename1: (groupid=0, jobs=1): err= 0: pid=83426: Mon Dec 9 11:02:37 2024 00:19:46.336 read: IOPS=297, BW=1191KiB/s (1220kB/s)(11.6MiB/10011msec) 00:19:46.336 slat (usec): min=2, max=8041, avg=26.63, stdev=198.61 00:19:46.336 clat (msec): min=15, max=107, avg=53.57, stdev=15.18 00:19:46.336 lat (msec): min=15, max=107, avg=53.60, stdev=15.18 00:19:46.336 clat percentiles (msec): 00:19:46.336 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 40], 00:19:46.336 | 30.00th=[ 48], 40.00th=[ 49], 50.00th=[ 52], 60.00th=[ 58], 00:19:46.336 | 70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 73], 95.00th=[ 84], 00:19:46.336 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 99], 99.95th=[ 99], 00:19:46.336 | 99.99th=[ 108] 00:19:46.336 bw ( KiB/s): min= 888, max= 1536, per=4.16%, avg=1181.47, stdev=155.22, samples=19 00:19:46.336 iops : min= 222, max= 384, avg=295.37, stdev=38.80, samples=19 00:19:46.336 lat (msec) : 20=0.60%, 50=45.54%, 100=53.82%, 250=0.03% 00:19:46.336 cpu : usr=36.23%, sys=0.64%, ctx=1142, majf=0, minf=9 00:19:46.336 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=79.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:46.336 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.336 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.336 issued rwts: total=2982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.336 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.336 filename1: (groupid=0, jobs=1): err= 0: pid=83427: Mon Dec 9 11:02:37 2024 00:19:46.336 read: IOPS=298, BW=1194KiB/s (1222kB/s)(11.7MiB/10030msec) 00:19:46.336 slat (usec): min=4, max=8027, avg=32.19, stdev=234.59 00:19:46.336 clat (msec): min=7, max=111, avg=53.46, stdev=15.72 00:19:46.336 lat (msec): min=7, max=111, avg=53.49, stdev=15.72 00:19:46.336 clat percentiles (msec): 00:19:46.336 | 1.00th=[ 15], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 41], 00:19:46.336 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 57], 00:19:46.336 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 74], 95.00th=[ 83], 00:19:46.336 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 109], 99.95th=[ 112], 00:19:46.336 | 99.99th=[ 112] 00:19:46.336 bw ( KiB/s): min= 888, max= 1781, per=4.19%, avg=1190.65, stdev=194.80, samples=20 00:19:46.336 iops : min= 222, max= 445, avg=297.65, stdev=48.66, samples=20 00:19:46.336 lat (msec) : 10=0.53%, 20=1.14%, 50=41.13%, 100=57.03%, 250=0.17% 00:19:46.336 cpu : usr=42.62%, sys=0.75%, ctx=1298, majf=0, minf=0 00:19:46.336 IO depths : 1=0.2%, 2=0.8%, 4=2.4%, 8=80.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:46.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.337 complete : 0=0.0%, 4=88.2%, 8=11.3%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.337 issued rwts: total=2993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.337 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.337 filename1: (groupid=0, jobs=1): err= 0: pid=83428: Mon Dec 9 11:02:37 2024 00:19:46.337 read: IOPS=296, BW=1184KiB/s (1213kB/s)(11.6MiB/10033msec) 00:19:46.337 slat (usec): min=5, max=7033, avg=27.01, stdev=190.56 00:19:46.337 clat (msec): min=2, max=119, avg=53.86, stdev=16.33 00:19:46.337 lat (msec): min=2, max=119, avg=53.89, stdev=16.33 00:19:46.337 clat percentiles (msec): 00:19:46.337 | 1.00th=[ 8], 5.00th=[ 30], 10.00th=[ 36], 20.00th=[ 41], 00:19:46.337 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 58], 00:19:46.337 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 77], 95.00th=[ 85], 00:19:46.337 | 99.00th=[ 93], 99.50th=[ 96], 99.90th=[ 104], 99.95th=[ 105], 00:19:46.337 | 99.99th=[ 121] 00:19:46.337 bw ( KiB/s): min= 816, max= 1920, per=4.17%, avg=1183.25, stdev=220.61, samples=20 00:19:46.337 iops : min= 204, max= 480, avg=295.80, stdev=55.15, samples=20 00:19:46.337 lat (msec) : 4=0.07%, 10=1.01%, 20=1.21%, 50=39.62%, 100=57.96% 00:19:46.337 lat (msec) : 250=0.13% 00:19:46.337 cpu : usr=39.25%, sys=0.67%, ctx=1230, majf=0, minf=9 00:19:46.337 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=80.3%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:46.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.337 complete : 0=0.0%, 4=88.4%, 8=11.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.337 issued rwts: total=2971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.337 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.337 filename1: (groupid=0, jobs=1): err= 0: pid=83429: Mon Dec 9 11:02:37 2024 00:19:46.337 read: IOPS=299, BW=1198KiB/s (1227kB/s)(11.7MiB/10017msec) 00:19:46.337 slat (usec): min=3, max=8049, avg=40.54, stdev=349.56 00:19:46.337 clat (msec): min=18, max=106, avg=53.20, stdev=14.81 00:19:46.337 lat (msec): min=18, max=106, avg=53.24, 
stdev=14.80 00:19:46.337 clat percentiles (msec): 00:19:46.337 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 40], 00:19:46.337 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 57], 00:19:46.337 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 72], 95.00th=[ 84], 00:19:46.337 | 99.00th=[ 94], 99.50th=[ 95], 99.90th=[ 97], 99.95th=[ 101], 00:19:46.337 | 99.99th=[ 107] 00:19:46.337 bw ( KiB/s): min= 872, max= 1552, per=4.19%, avg=1190.32, stdev=162.33, samples=19 00:19:46.337 iops : min= 218, max= 388, avg=297.58, stdev=40.58, samples=19 00:19:46.337 lat (msec) : 20=0.47%, 50=47.13%, 100=52.33%, 250=0.07% 00:19:46.337 cpu : usr=37.19%, sys=0.50%, ctx=1048, majf=0, minf=9 00:19:46.337 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=80.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:46.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.337 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.337 issued rwts: total=3000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.337 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.337 filename1: (groupid=0, jobs=1): err= 0: pid=83430: Mon Dec 9 11:02:37 2024 00:19:46.337 read: IOPS=299, BW=1199KiB/s (1227kB/s)(11.7MiB/10002msec) 00:19:46.337 slat (usec): min=2, max=8041, avg=42.62, stdev=312.80 00:19:46.337 clat (usec): min=1828, max=104374, avg=53191.29, stdev=16434.00 00:19:46.337 lat (usec): min=1834, max=104383, avg=53233.91, stdev=16434.10 00:19:46.337 clat percentiles (msec): 00:19:46.337 | 1.00th=[ 4], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 40], 00:19:46.337 | 30.00th=[ 47], 40.00th=[ 49], 50.00th=[ 52], 60.00th=[ 57], 00:19:46.337 | 70.00th=[ 60], 80.00th=[ 64], 90.00th=[ 78], 95.00th=[ 84], 00:19:46.337 | 99.00th=[ 95], 99.50th=[ 95], 99.90th=[ 104], 99.95th=[ 104], 00:19:46.337 | 99.99th=[ 105] 00:19:46.337 bw ( KiB/s): min= 864, max= 1352, per=4.13%, avg=1173.89, stdev=133.52, samples=19 00:19:46.337 iops : min= 216, max= 338, avg=293.47, stdev=33.38, samples=19 00:19:46.337 lat (msec) : 2=0.20%, 4=1.20%, 10=0.17%, 20=0.87%, 50=43.74% 00:19:46.337 lat (msec) : 100=53.69%, 250=0.13% 00:19:46.337 cpu : usr=39.25%, sys=0.56%, ctx=1263, majf=0, minf=9 00:19:46.337 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=80.6%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:46.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.337 complete : 0=0.0%, 4=88.1%, 8=11.4%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.337 issued rwts: total=2997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.337 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.337 filename2: (groupid=0, jobs=1): err= 0: pid=83431: Mon Dec 9 11:02:37 2024 00:19:46.337 read: IOPS=299, BW=1200KiB/s (1228kB/s)(11.7MiB/10007msec) 00:19:46.337 slat (usec): min=3, max=8036, avg=31.58, stdev=291.54 00:19:46.337 clat (msec): min=7, max=108, avg=53.20, stdev=15.35 00:19:46.337 lat (msec): min=8, max=108, avg=53.23, stdev=15.34 00:19:46.337 clat percentiles (msec): 00:19:46.337 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 39], 00:19:46.337 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 52], 60.00th=[ 58], 00:19:46.337 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 74], 95.00th=[ 84], 00:19:46.337 | 99.00th=[ 93], 99.50th=[ 99], 99.90th=[ 100], 99.95th=[ 109], 00:19:46.337 | 99.99th=[ 109] 00:19:46.337 bw ( KiB/s): min= 896, max= 1376, per=4.18%, avg=1187.05, stdev=142.44, samples=19 00:19:46.337 iops : min= 224, max= 344, avg=296.74, stdev=35.59, samples=19 00:19:46.337 lat (msec) : 10=0.33%, 20=0.20%, 
50=48.08%, 100=51.32%, 250=0.07% 00:19:46.337 cpu : usr=36.08%, sys=0.80%, ctx=1178, majf=0, minf=9 00:19:46.337 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=79.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:46.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.337 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.337 issued rwts: total=3001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.337 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.337 filename2: (groupid=0, jobs=1): err= 0: pid=83432: Mon Dec 9 11:02:37 2024 00:19:46.337 read: IOPS=298, BW=1193KiB/s (1222kB/s)(11.7MiB/10026msec) 00:19:46.337 slat (usec): min=5, max=4050, avg=29.91, stdev=203.04 00:19:46.337 clat (msec): min=12, max=106, avg=53.48, stdev=16.13 00:19:46.337 lat (msec): min=12, max=106, avg=53.51, stdev=16.12 00:19:46.337 clat percentiles (msec): 00:19:46.337 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 40], 00:19:46.337 | 30.00th=[ 46], 40.00th=[ 49], 50.00th=[ 54], 60.00th=[ 56], 00:19:46.337 | 70.00th=[ 60], 80.00th=[ 65], 90.00th=[ 80], 95.00th=[ 85], 00:19:46.337 | 99.00th=[ 93], 99.50th=[ 97], 99.90th=[ 105], 99.95th=[ 106], 00:19:46.337 | 99.99th=[ 107] 00:19:46.337 bw ( KiB/s): min= 784, max= 1672, per=4.20%, avg=1192.00, stdev=197.94, samples=20 00:19:46.337 iops : min= 196, max= 418, avg=298.00, stdev=49.49, samples=20 00:19:46.337 lat (msec) : 20=1.20%, 50=41.74%, 100=56.89%, 250=0.17% 00:19:46.337 cpu : usr=42.96%, sys=0.89%, ctx=1226, majf=0, minf=9 00:19:46.337 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.5%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:46.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.337 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.337 issued rwts: total=2990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.337 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.337 filename2: (groupid=0, jobs=1): err= 0: pid=83433: Mon Dec 9 11:02:37 2024 00:19:46.337 read: IOPS=286, BW=1146KiB/s (1173kB/s)(11.2MiB/10010msec) 00:19:46.337 slat (usec): min=2, max=8057, avg=32.88, stdev=318.40 00:19:46.337 clat (msec): min=20, max=106, avg=55.65, stdev=15.45 00:19:46.337 lat (msec): min=20, max=106, avg=55.68, stdev=15.45 00:19:46.337 clat percentiles (msec): 00:19:46.337 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 45], 00:19:46.337 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 55], 60.00th=[ 58], 00:19:46.337 | 70.00th=[ 61], 80.00th=[ 68], 90.00th=[ 78], 95.00th=[ 87], 00:19:46.337 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 105], 99.95th=[ 107], 00:19:46.337 | 99.99th=[ 107] 00:19:46.337 bw ( KiB/s): min= 888, max= 1539, per=4.01%, avg=1137.00, stdev=164.10, samples=19 00:19:46.337 iops : min= 222, max= 384, avg=284.21, stdev=40.92, samples=19 00:19:46.337 lat (msec) : 50=38.47%, 100=61.39%, 250=0.14% 00:19:46.337 cpu : usr=38.33%, sys=0.82%, ctx=1139, majf=0, minf=9 00:19:46.337 IO depths : 1=0.1%, 2=2.1%, 4=8.9%, 8=73.8%, 16=15.2%, 32=0.0%, >=64=0.0% 00:19:46.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.337 complete : 0=0.0%, 4=89.8%, 8=8.2%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.337 issued rwts: total=2867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.337 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.337 filename2: (groupid=0, jobs=1): err= 0: pid=83434: Mon Dec 9 11:02:37 2024 00:19:46.337 read: IOPS=294, BW=1179KiB/s (1207kB/s)(11.5MiB/10019msec) 00:19:46.337 slat (usec): 
min=2, max=8036, avg=34.48, stdev=321.51 00:19:46.337 clat (msec): min=21, max=105, avg=54.14, stdev=15.09 00:19:46.337 lat (msec): min=21, max=105, avg=54.18, stdev=15.09 00:19:46.337 clat percentiles (msec): 00:19:46.337 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 42], 00:19:46.337 | 30.00th=[ 48], 40.00th=[ 49], 50.00th=[ 54], 60.00th=[ 57], 00:19:46.337 | 70.00th=[ 61], 80.00th=[ 66], 90.00th=[ 75], 95.00th=[ 84], 00:19:46.337 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 105], 99.95th=[ 105], 00:19:46.337 | 99.99th=[ 106] 00:19:46.337 bw ( KiB/s): min= 864, max= 1408, per=4.11%, avg=1167.58, stdev=138.45, samples=19 00:19:46.337 iops : min= 216, max= 352, avg=291.89, stdev=34.61, samples=19 00:19:46.337 lat (msec) : 50=45.53%, 100=54.37%, 250=0.10% 00:19:46.337 cpu : usr=38.56%, sys=0.87%, ctx=1072, majf=0, minf=9 00:19:46.337 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.1%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:46.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.337 complete : 0=0.0%, 4=88.3%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.337 issued rwts: total=2952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.337 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.337 filename2: (groupid=0, jobs=1): err= 0: pid=83435: Mon Dec 9 11:02:37 2024 00:19:46.337 read: IOPS=286, BW=1147KiB/s (1175kB/s)(11.2MiB/10030msec) 00:19:46.337 slat (usec): min=5, max=8023, avg=18.53, stdev=149.66 00:19:46.337 clat (msec): min=6, max=110, avg=55.67, stdev=17.11 00:19:46.337 lat (msec): min=6, max=110, avg=55.69, stdev=17.10 00:19:46.337 clat percentiles (msec): 00:19:46.337 | 1.00th=[ 8], 5.00th=[ 25], 10.00th=[ 36], 20.00th=[ 46], 00:19:46.337 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 58], 60.00th=[ 61], 00:19:46.337 | 70.00th=[ 61], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 85], 00:19:46.337 | 99.00th=[ 96], 99.50th=[ 96], 99.90th=[ 108], 99.95th=[ 110], 00:19:46.337 | 99.99th=[ 111] 00:19:46.337 bw ( KiB/s): min= 816, max= 2031, per=4.03%, avg=1144.05, stdev=248.51, samples=20 00:19:46.338 iops : min= 204, max= 507, avg=285.95, stdev=62.01, samples=20 00:19:46.338 lat (msec) : 10=1.11%, 20=1.67%, 50=38.62%, 100=58.46%, 250=0.14% 00:19:46.338 cpu : usr=33.35%, sys=0.92%, ctx=870, majf=0, minf=9 00:19:46.338 IO depths : 1=0.2%, 2=1.0%, 4=3.7%, 8=78.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:46.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.338 complete : 0=0.0%, 4=89.1%, 8=10.1%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.338 issued rwts: total=2877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.338 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.338 filename2: (groupid=0, jobs=1): err= 0: pid=83436: Mon Dec 9 11:02:37 2024 00:19:46.338 read: IOPS=298, BW=1192KiB/s (1221kB/s)(11.7MiB/10023msec) 00:19:46.338 slat (usec): min=2, max=8043, avg=30.58, stdev=327.94 00:19:46.338 clat (msec): min=15, max=119, avg=53.54, stdev=14.93 00:19:46.338 lat (msec): min=15, max=119, avg=53.57, stdev=14.93 00:19:46.338 clat percentiles (msec): 00:19:46.338 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 36], 20.00th=[ 40], 00:19:46.338 | 30.00th=[ 48], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 58], 00:19:46.338 | 70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 72], 95.00th=[ 84], 00:19:46.338 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 100], 99.95th=[ 108], 00:19:46.338 | 99.99th=[ 121] 00:19:46.338 bw ( KiB/s): min= 864, max= 1408, per=4.19%, avg=1188.40, stdev=145.18, samples=20 00:19:46.338 iops : min= 216, max= 352, 
avg=297.10, stdev=36.29, samples=20 00:19:46.338 lat (msec) : 20=0.07%, 50=48.28%, 100=51.59%, 250=0.07% 00:19:46.338 cpu : usr=33.48%, sys=0.75%, ctx=879, majf=0, minf=9 00:19:46.338 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.1%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:46.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.338 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.338 issued rwts: total=2987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.338 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.338 filename2: (groupid=0, jobs=1): err= 0: pid=83437: Mon Dec 9 11:02:37 2024 00:19:46.338 read: IOPS=288, BW=1152KiB/s (1180kB/s)(11.3MiB/10046msec) 00:19:46.338 slat (usec): min=5, max=8047, avg=29.70, stdev=236.07 00:19:46.338 clat (msec): min=7, max=109, avg=55.37, stdev=16.02 00:19:46.338 lat (msec): min=7, max=109, avg=55.40, stdev=16.01 00:19:46.338 clat percentiles (msec): 00:19:46.338 | 1.00th=[ 18], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 46], 00:19:46.338 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 59], 00:19:46.338 | 70.00th=[ 61], 80.00th=[ 66], 90.00th=[ 79], 95.00th=[ 85], 00:19:46.338 | 99.00th=[ 96], 99.50th=[ 105], 99.90th=[ 108], 99.95th=[ 108], 00:19:46.338 | 99.99th=[ 110] 00:19:46.338 bw ( KiB/s): min= 816, max= 1777, per=4.05%, avg=1149.75, stdev=194.07, samples=20 00:19:46.338 iops : min= 204, max= 444, avg=287.40, stdev=48.48, samples=20 00:19:46.338 lat (msec) : 10=0.55%, 20=2.14%, 50=36.73%, 100=59.95%, 250=0.62% 00:19:46.338 cpu : usr=36.41%, sys=0.53%, ctx=1159, majf=0, minf=9 00:19:46.338 IO depths : 1=0.2%, 2=1.1%, 4=3.7%, 8=78.5%, 16=16.6%, 32=0.0%, >=64=0.0% 00:19:46.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.338 complete : 0=0.0%, 4=89.0%, 8=10.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.338 issued rwts: total=2894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.338 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.338 filename2: (groupid=0, jobs=1): err= 0: pid=83438: Mon Dec 9 11:02:37 2024 00:19:46.338 read: IOPS=307, BW=1229KiB/s (1259kB/s)(12.0MiB/10003msec) 00:19:46.338 slat (usec): min=2, max=8045, avg=33.17, stdev=259.92 00:19:46.338 clat (msec): min=2, max=112, avg=51.92, stdev=15.96 00:19:46.338 lat (msec): min=2, max=112, avg=51.95, stdev=15.96 00:19:46.338 clat percentiles (msec): 00:19:46.338 | 1.00th=[ 5], 5.00th=[ 32], 10.00th=[ 35], 20.00th=[ 39], 00:19:46.338 | 30.00th=[ 44], 40.00th=[ 48], 50.00th=[ 51], 60.00th=[ 56], 00:19:46.338 | 70.00th=[ 59], 80.00th=[ 63], 90.00th=[ 73], 95.00th=[ 82], 00:19:46.338 | 99.00th=[ 93], 99.50th=[ 97], 99.90th=[ 102], 99.95th=[ 105], 00:19:46.338 | 99.99th=[ 113] 00:19:46.338 bw ( KiB/s): min= 872, max= 1384, per=4.25%, avg=1206.00, stdev=140.90, samples=19 00:19:46.338 iops : min= 218, max= 346, avg=301.47, stdev=35.28, samples=19 00:19:46.338 lat (msec) : 4=0.94%, 10=0.33%, 20=0.29%, 50=48.41%, 100=49.67% 00:19:46.338 lat (msec) : 250=0.36% 00:19:46.338 cpu : usr=38.20%, sys=0.83%, ctx=1346, majf=0, minf=9 00:19:46.338 IO depths : 1=0.1%, 2=0.6%, 4=2.0%, 8=81.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:19:46.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.338 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:46.338 issued rwts: total=3074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:46.338 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:46.338 00:19:46.338 Run status group 0 (all 
jobs): 00:19:46.338 READ: bw=27.7MiB/s (29.1MB/s), 1140KiB/s-1230KiB/s (1167kB/s-1260kB/s), io=278MiB (292MB), run=10002-10046msec 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:46.338 bdev_null0 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.338 11:02:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:46.338 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.338 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:46.338 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.338 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:46.338 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.338 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:46.338 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.338 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:46.338 [2024-12-09 11:02:38.021581] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:46.338 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.338 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:46.338 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:19:46.338 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:19:46.338 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 
00:19:46.338 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:46.339 bdev_null1 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:46.339 { 00:19:46.339 "params": { 00:19:46.339 "name": "Nvme$subsystem", 00:19:46.339 "trtype": "$TEST_TRANSPORT", 00:19:46.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.339 "adrfam": "ipv4", 00:19:46.339 "trsvcid": "$NVMF_PORT", 00:19:46.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.339 "hdgst": ${hdgst:-false}, 00:19:46.339 "ddgst": ${ddgst:-false} 00:19:46.339 }, 00:19:46.339 "method": "bdev_nvme_attach_controller" 00:19:46.339 } 00:19:46.339 EOF 00:19:46.339 )") 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:46.339 11:02:38 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:46.339 { 00:19:46.339 "params": { 00:19:46.339 "name": "Nvme$subsystem", 00:19:46.339 "trtype": "$TEST_TRANSPORT", 00:19:46.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.339 "adrfam": "ipv4", 00:19:46.339 "trsvcid": "$NVMF_PORT", 00:19:46.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.339 "hdgst": ${hdgst:-false}, 00:19:46.339 "ddgst": ${ddgst:-false} 00:19:46.339 }, 00:19:46.339 "method": "bdev_nvme_attach_controller" 00:19:46.339 } 00:19:46.339 EOF 00:19:46.339 )") 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:46.339 "params": { 00:19:46.339 "name": "Nvme0", 00:19:46.339 "trtype": "tcp", 00:19:46.339 "traddr": "10.0.0.3", 00:19:46.339 "adrfam": "ipv4", 00:19:46.339 "trsvcid": "4420", 00:19:46.339 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:46.339 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:46.339 "hdgst": false, 00:19:46.339 "ddgst": false 00:19:46.339 }, 00:19:46.339 "method": "bdev_nvme_attach_controller" 00:19:46.339 },{ 00:19:46.339 "params": { 00:19:46.339 "name": "Nvme1", 00:19:46.339 "trtype": "tcp", 00:19:46.339 "traddr": "10.0.0.3", 00:19:46.339 "adrfam": "ipv4", 00:19:46.339 "trsvcid": "4420", 00:19:46.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.339 "hdgst": false, 00:19:46.339 "ddgst": false 00:19:46.339 }, 00:19:46.339 "method": "bdev_nvme_attach_controller" 00:19:46.339 }' 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:46.339 11:02:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:46.339 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:46.339 ... 00:19:46.339 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:46.339 ... 
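(For context, the following is an illustrative reconstruction of the setup the trace above performs; it assumes a running nvmf_tgt and SPDK's scripts/rpc.py, and the config/job file names are placeholders rather than the /dev/fd descriptors the script actually passes.)

  # Null bdev with 16-byte metadata and DIF type 1, exported over NVMe/TCP (values taken from the trace)
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # fio is then driven through the SPDK bdev fio plugin, consuming a bdev_nvme_attach_controller
  # JSON config like the one printed above (placeholder file names, not a verbatim replay):
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./nvme_tcp.json ./dif_rand_params.job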
00:19:46.339 fio-3.35 00:19:46.339 Starting 4 threads 00:19:51.616 00:19:51.616 filename0: (groupid=0, jobs=1): err= 0: pid=83588: Mon Dec 9 11:02:43 2024 00:19:51.616 read: IOPS=3169, BW=24.8MiB/s (26.0MB/s)(124MiB/5001msec) 00:19:51.616 slat (nsec): min=5504, max=63506, avg=12853.44, stdev=8741.56 00:19:51.616 clat (usec): min=327, max=4930, avg=2492.44, stdev=712.23 00:19:51.616 lat (usec): min=338, max=4955, avg=2505.29, stdev=712.57 00:19:51.616 clat percentiles (usec): 00:19:51.616 | 1.00th=[ 1188], 5.00th=[ 1614], 10.00th=[ 1631], 20.00th=[ 1663], 00:19:51.616 | 30.00th=[ 1762], 40.00th=[ 2311], 50.00th=[ 2507], 60.00th=[ 2835], 00:19:51.616 | 70.00th=[ 3064], 80.00th=[ 3163], 90.00th=[ 3392], 95.00th=[ 3556], 00:19:51.616 | 99.00th=[ 3785], 99.50th=[ 3884], 99.90th=[ 4228], 99.95th=[ 4490], 00:19:51.616 | 99.99th=[ 4817] 00:19:51.616 bw ( KiB/s): min=22896, max=27264, per=29.41%, avg=25470.22, stdev=1543.01, samples=9 00:19:51.616 iops : min= 2862, max= 3408, avg=3183.78, stdev=192.88, samples=9 00:19:51.616 lat (usec) : 500=0.01%, 750=0.35%, 1000=0.26% 00:19:51.616 lat (msec) : 2=34.08%, 4=65.02%, 10=0.28% 00:19:51.616 cpu : usr=95.10%, sys=4.18%, ctx=11, majf=0, minf=0 00:19:51.616 IO depths : 1=0.2%, 2=0.9%, 4=63.4%, 8=35.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.616 complete : 0=0.0%, 4=99.6%, 8=0.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.616 issued rwts: total=15850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.616 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:51.616 filename0: (groupid=0, jobs=1): err= 0: pid=83589: Mon Dec 9 11:02:43 2024 00:19:51.616 read: IOPS=2624, BW=20.5MiB/s (21.5MB/s)(103MiB/5001msec) 00:19:51.616 slat (nsec): min=5451, max=93490, avg=22906.13, stdev=12221.24 00:19:51.616 clat (usec): min=321, max=6046, avg=2975.51, stdev=547.70 00:19:51.616 lat (usec): min=330, max=6073, avg=2998.42, stdev=547.06 00:19:51.616 clat percentiles (usec): 00:19:51.616 | 1.00th=[ 1549], 5.00th=[ 1860], 10.00th=[ 2024], 20.00th=[ 2606], 00:19:51.616 | 30.00th=[ 3032], 40.00th=[ 3097], 50.00th=[ 3130], 60.00th=[ 3163], 00:19:51.616 | 70.00th=[ 3195], 80.00th=[ 3261], 90.00th=[ 3490], 95.00th=[ 3621], 00:19:51.616 | 99.00th=[ 3982], 99.50th=[ 4146], 99.90th=[ 4555], 99.95th=[ 5997], 00:19:51.616 | 99.99th=[ 6063] 00:19:51.616 bw ( KiB/s): min=19712, max=22896, per=24.13%, avg=20898.22, stdev=970.99, samples=9 00:19:51.616 iops : min= 2464, max= 2862, avg=2612.22, stdev=121.32, samples=9 00:19:51.616 lat (usec) : 500=0.04%, 750=0.06%, 1000=0.27% 00:19:51.616 lat (msec) : 2=8.84%, 4=89.85%, 10=0.94% 00:19:51.616 cpu : usr=96.60%, sys=2.70%, ctx=4, majf=0, minf=0 00:19:51.616 IO depths : 1=0.5%, 2=14.1%, 4=56.2%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.616 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.616 issued rwts: total=13125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.616 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:51.616 filename1: (groupid=0, jobs=1): err= 0: pid=83590: Mon Dec 9 11:02:43 2024 00:19:51.616 read: IOPS=2578, BW=20.1MiB/s (21.1MB/s)(101MiB/5002msec) 00:19:51.616 slat (nsec): min=5246, max=93370, avg=22456.93, stdev=12220.52 00:19:51.616 clat (usec): min=598, max=5281, avg=3027.96, stdev=535.42 00:19:51.616 lat (usec): min=605, max=5288, avg=3050.41, stdev=534.71 00:19:51.616 clat percentiles (usec): 
00:19:51.616 | 1.00th=[ 1696], 5.00th=[ 1909], 10.00th=[ 2040], 20.00th=[ 2769], 00:19:51.616 | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3130], 60.00th=[ 3163], 00:19:51.616 | 70.00th=[ 3195], 80.00th=[ 3359], 90.00th=[ 3556], 95.00th=[ 3720], 00:19:51.616 | 99.00th=[ 4146], 99.50th=[ 4293], 99.90th=[ 4686], 99.95th=[ 4817], 00:19:51.616 | 99.99th=[ 5080] 00:19:51.616 bw ( KiB/s): min=19488, max=22832, per=23.76%, avg=20575.44, stdev=1073.93, samples=9 00:19:51.616 iops : min= 2436, max= 2854, avg=2571.89, stdev=134.19, samples=9 00:19:51.616 lat (usec) : 750=0.02%, 1000=0.04% 00:19:51.616 lat (msec) : 2=8.69%, 4=89.54%, 10=1.71% 00:19:51.616 cpu : usr=96.58%, sys=2.72%, ctx=10, majf=0, minf=0 00:19:51.616 IO depths : 1=0.6%, 2=15.6%, 4=55.3%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.616 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.616 issued rwts: total=12900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.616 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:51.616 filename1: (groupid=0, jobs=1): err= 0: pid=83591: Mon Dec 9 11:02:43 2024 00:19:51.616 read: IOPS=2452, BW=19.2MiB/s (20.1MB/s)(95.8MiB/5002msec) 00:19:51.616 slat (nsec): min=5518, max=63626, avg=11995.97, stdev=9023.88 00:19:51.616 clat (usec): min=757, max=4882, avg=3218.87, stdev=484.53 00:19:51.616 lat (usec): min=767, max=4894, avg=3230.87, stdev=484.26 00:19:51.616 clat percentiles (usec): 00:19:51.616 | 1.00th=[ 1582], 5.00th=[ 2212], 10.00th=[ 2769], 20.00th=[ 3097], 00:19:51.616 | 30.00th=[ 3163], 40.00th=[ 3195], 50.00th=[ 3195], 60.00th=[ 3228], 00:19:51.616 | 70.00th=[ 3294], 80.00th=[ 3589], 90.00th=[ 3752], 95.00th=[ 3884], 00:19:51.616 | 99.00th=[ 4228], 99.50th=[ 4293], 99.90th=[ 4555], 99.95th=[ 4817], 00:19:51.616 | 99.99th=[ 4883] 00:19:51.616 bw ( KiB/s): min=17280, max=22752, per=22.76%, avg=19708.44, stdev=1438.64, samples=9 00:19:51.616 iops : min= 2160, max= 2844, avg=2463.56, stdev=179.83, samples=9 00:19:51.616 lat (usec) : 1000=0.12% 00:19:51.616 lat (msec) : 2=4.03%, 4=93.03%, 10=2.82% 00:19:51.616 cpu : usr=95.66%, sys=3.74%, ctx=48, majf=0, minf=0 00:19:51.616 IO depths : 1=0.4%, 2=20.5%, 4=52.6%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:51.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.616 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:51.616 issued rwts: total=12267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:51.616 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:51.616 00:19:51.616 Run status group 0 (all jobs): 00:19:51.616 READ: bw=84.6MiB/s (88.7MB/s), 19.2MiB/s-24.8MiB/s (20.1MB/s-26.0MB/s), io=423MiB (444MB), run=5001-5002msec 00:19:51.616 11:02:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:51.616 11:02:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:19:51.616 11:02:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.617 00:19:51.617 real 0m23.846s 00:19:51.617 user 2m8.663s 00:19:51.617 sys 0m4.172s 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:51.617 11:02:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:51.617 ************************************ 00:19:51.617 END TEST fio_dif_rand_params 00:19:51.617 ************************************ 00:19:51.617 11:02:44 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:51.617 11:02:44 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:51.617 11:02:44 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:51.617 11:02:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:51.617 ************************************ 00:19:51.617 START TEST fio_dif_digest 00:19:51.617 ************************************ 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:51.617 bdev_null0 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:19:51.617 [2024-12-09 11:02:44.373133] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:51.617 { 00:19:51.617 "params": { 00:19:51.617 "name": "Nvme$subsystem", 00:19:51.617 "trtype": "$TEST_TRANSPORT", 00:19:51.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.617 "adrfam": "ipv4", 00:19:51.617 "trsvcid": "$NVMF_PORT", 00:19:51.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.617 
"hdgst": ${hdgst:-false}, 00:19:51.617 "ddgst": ${ddgst:-false} 00:19:51.617 }, 00:19:51.617 "method": "bdev_nvme_attach_controller" 00:19:51.617 } 00:19:51.617 EOF 00:19:51.617 )") 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:51.617 "params": { 00:19:51.617 "name": "Nvme0", 00:19:51.617 "trtype": "tcp", 00:19:51.617 "traddr": "10.0.0.3", 00:19:51.617 "adrfam": "ipv4", 00:19:51.617 "trsvcid": "4420", 00:19:51.617 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:51.617 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:51.617 "hdgst": true, 00:19:51.617 "ddgst": true 00:19:51.617 }, 00:19:51.617 "method": "bdev_nvme_attach_controller" 00:19:51.617 }' 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:51.617 11:02:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:51.617 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:51.617 ... 
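(The fio_dif_digest setup traced above follows the same pattern as the previous test; the differences visible in the trace are the DIF type on the null bdev and the TCP digests requested when attaching the controller. Sketch for illustration only, using the values from the trace:)

  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # ...and the generated bdev_nvme_attach_controller params enable header/data digests:
  #   "hdgst": true,
  #   "ddgst": true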
00:19:51.617 fio-3.35 00:19:51.617 Starting 3 threads 00:20:03.833 00:20:03.833 filename0: (groupid=0, jobs=1): err= 0: pid=83697: Mon Dec 9 11:02:55 2024 00:20:03.833 read: IOPS=288, BW=36.1MiB/s (37.9MB/s)(362MiB/10010msec) 00:20:03.833 slat (nsec): min=5444, max=64075, avg=13571.86, stdev=9078.11 00:20:03.833 clat (usec): min=4093, max=10812, avg=10348.84, stdev=226.80 00:20:03.833 lat (usec): min=4106, max=10823, avg=10362.41, stdev=226.64 00:20:03.833 clat percentiles (usec): 00:20:03.833 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10290], 20.00th=[10290], 00:20:03.833 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10290], 60.00th=[10421], 00:20:03.833 | 70.00th=[10421], 80.00th=[10421], 90.00th=[10552], 95.00th=[10552], 00:20:03.833 | 99.00th=[10683], 99.50th=[10683], 99.90th=[10814], 99.95th=[10814], 00:20:03.833 | 99.99th=[10814] 00:20:03.833 bw ( KiB/s): min=36864, max=37632, per=33.40%, avg=37025.68, stdev=321.68, samples=19 00:20:03.833 iops : min= 288, max= 294, avg=289.26, stdev= 2.51, samples=19 00:20:03.833 lat (msec) : 10=0.21%, 20=99.79% 00:20:03.833 cpu : usr=96.70%, sys=2.89%, ctx=14, majf=0, minf=0 00:20:03.833 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.833 issued rwts: total=2892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.833 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:03.833 filename0: (groupid=0, jobs=1): err= 0: pid=83698: Mon Dec 9 11:02:55 2024 00:20:03.833 read: IOPS=288, BW=36.1MiB/s (37.8MB/s)(361MiB/10006msec) 00:20:03.833 slat (nsec): min=5779, max=72917, avg=12722.28, stdev=7182.90 00:20:03.833 clat (usec): min=6714, max=12962, avg=10358.18, stdev=170.67 00:20:03.833 lat (usec): min=6721, max=12993, avg=10370.90, stdev=171.11 00:20:03.833 clat percentiles (usec): 00:20:03.833 | 1.00th=[10159], 5.00th=[10290], 10.00th=[10290], 20.00th=[10290], 00:20:03.833 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10290], 60.00th=[10421], 00:20:03.833 | 70.00th=[10421], 80.00th=[10421], 90.00th=[10552], 95.00th=[10552], 00:20:03.833 | 99.00th=[10683], 99.50th=[10683], 99.90th=[12911], 99.95th=[12911], 00:20:03.833 | 99.99th=[12911] 00:20:03.833 bw ( KiB/s): min=36096, max=37632, per=33.36%, avg=36985.26, stdev=385.12, samples=19 00:20:03.833 iops : min= 282, max= 294, avg=288.95, stdev= 3.01, samples=19 00:20:03.833 lat (msec) : 10=0.10%, 20=99.90% 00:20:03.833 cpu : usr=94.34%, sys=5.26%, ctx=16, majf=0, minf=0 00:20:03.833 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.833 issued rwts: total=2889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.833 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:03.833 filename0: (groupid=0, jobs=1): err= 0: pid=83699: Mon Dec 9 11:02:55 2024 00:20:03.833 read: IOPS=288, BW=36.1MiB/s (37.8MB/s)(361MiB/10006msec) 00:20:03.833 slat (nsec): min=5433, max=40987, avg=9867.21, stdev=3932.16 00:20:03.833 clat (usec): min=8448, max=11138, avg=10365.92, stdev=113.09 00:20:03.833 lat (usec): min=8455, max=11168, avg=10375.78, stdev=113.50 00:20:03.833 clat percentiles (usec): 00:20:03.833 | 1.00th=[10159], 5.00th=[10290], 10.00th=[10290], 20.00th=[10290], 00:20:03.833 | 30.00th=[10290], 40.00th=[10290], 
50.00th=[10290], 60.00th=[10421], 00:20:03.833 | 70.00th=[10421], 80.00th=[10421], 90.00th=[10552], 95.00th=[10552], 00:20:03.833 | 99.00th=[10683], 99.50th=[10683], 99.90th=[11076], 99.95th=[11076], 00:20:03.833 | 99.99th=[11076] 00:20:03.833 bw ( KiB/s): min=36096, max=37632, per=33.36%, avg=36985.26, stdev=385.12, samples=19 00:20:03.833 iops : min= 282, max= 294, avg=288.95, stdev= 3.01, samples=19 00:20:03.833 lat (msec) : 10=0.10%, 20=99.90% 00:20:03.833 cpu : usr=96.95%, sys=2.63%, ctx=79, majf=0, minf=0 00:20:03.833 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:03.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:03.833 issued rwts: total=2889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:03.833 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:03.833 00:20:03.833 Run status group 0 (all jobs): 00:20:03.833 READ: bw=108MiB/s (114MB/s), 36.1MiB/s-36.1MiB/s (37.8MB/s-37.9MB/s), io=1084MiB (1136MB), run=10006-10010msec 00:20:03.833 11:02:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:03.833 11:02:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:03.833 11:02:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:03.833 11:02:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:03.833 11:02:55 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:03.833 11:02:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:03.833 11:02:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.833 11:02:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:03.833 11:02:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.833 11:02:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:03.833 11:02:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.833 11:02:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:03.833 11:02:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.833 00:20:03.833 real 0m11.245s 00:20:03.833 user 0m29.654s 00:20:03.833 sys 0m1.464s 00:20:03.833 11:02:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.833 11:02:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:03.833 ************************************ 00:20:03.833 END TEST fio_dif_digest 00:20:03.833 ************************************ 00:20:03.833 11:02:55 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:03.833 11:02:55 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:03.833 11:02:55 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:03.833 11:02:55 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:20:03.833 11:02:55 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:03.833 11:02:55 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:20:03.833 11:02:55 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:03.833 11:02:55 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:03.833 rmmod nvme_tcp 00:20:03.833 rmmod nvme_fabrics 00:20:03.833 rmmod nvme_keyring 00:20:03.833 11:02:55 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:03.833 11:02:55 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:20:03.833 11:02:55 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:20:03.833 11:02:55 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82922 ']' 00:20:03.833 11:02:55 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82922 00:20:03.833 11:02:55 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 82922 ']' 00:20:03.833 11:02:55 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 82922 00:20:03.833 11:02:55 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:20:03.833 11:02:55 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.833 11:02:55 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82922 00:20:03.833 11:02:55 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:03.833 11:02:55 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:03.833 killing process with pid 82922 00:20:03.833 11:02:55 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82922' 00:20:03.833 11:02:55 nvmf_dif -- common/autotest_common.sh@973 -- # kill 82922 00:20:03.833 11:02:55 nvmf_dif -- common/autotest_common.sh@978 -- # wait 82922 00:20:03.834 11:02:56 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:20:03.834 11:02:56 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:03.834 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:03.834 Waiting for block devices as requested 00:20:03.834 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:03.834 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:03.834 11:02:56 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:03.834 11:02:56 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:03.834 11:02:56 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:20:03.834 11:02:56 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:20:03.834 11:02:56 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:03.834 11:02:56 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:20:03.834 11:02:56 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:03.834 11:02:56 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:03.834 11:02:56 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:03.834 11:02:56 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:04.093 11:02:57 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:04.093 11:02:57 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:04.093 11:02:57 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:04.093 11:02:57 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:04.093 11:02:57 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:04.093 11:02:57 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:04.093 11:02:57 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:04.093 11:02:57 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:04.093 11:02:57 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:04.093 11:02:57 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:04.093 11:02:57 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:04.093 11:02:57 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:04.093 11:02:57 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.093 11:02:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:04.093 11:02:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.093 11:02:57 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:20:04.093 00:20:04.093 real 1m1.594s 00:20:04.093 user 3m55.442s 00:20:04.093 sys 0m15.340s 00:20:04.093 11:02:57 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.093 11:02:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:04.093 ************************************ 00:20:04.093 END TEST nvmf_dif 00:20:04.093 ************************************ 00:20:04.352 11:02:57 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:04.352 11:02:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:04.352 11:02:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.352 11:02:57 -- common/autotest_common.sh@10 -- # set +x 00:20:04.352 ************************************ 00:20:04.352 START TEST nvmf_abort_qd_sizes 00:20:04.352 ************************************ 00:20:04.352 11:02:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:04.352 * Looking for test storage... 00:20:04.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:04.352 11:02:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:04.352 11:02:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:20:04.352 11:02:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:20:04.612 11:02:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:04.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.613 --rc genhtml_branch_coverage=1 00:20:04.613 --rc genhtml_function_coverage=1 00:20:04.613 --rc genhtml_legend=1 00:20:04.613 --rc geninfo_all_blocks=1 00:20:04.613 --rc geninfo_unexecuted_blocks=1 00:20:04.613 00:20:04.613 ' 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:04.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.613 --rc genhtml_branch_coverage=1 00:20:04.613 --rc genhtml_function_coverage=1 00:20:04.613 --rc genhtml_legend=1 00:20:04.613 --rc geninfo_all_blocks=1 00:20:04.613 --rc geninfo_unexecuted_blocks=1 00:20:04.613 00:20:04.613 ' 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:04.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.613 --rc genhtml_branch_coverage=1 00:20:04.613 --rc genhtml_function_coverage=1 00:20:04.613 --rc genhtml_legend=1 00:20:04.613 --rc geninfo_all_blocks=1 00:20:04.613 --rc geninfo_unexecuted_blocks=1 00:20:04.613 00:20:04.613 ' 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:04.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.613 --rc genhtml_branch_coverage=1 00:20:04.613 --rc genhtml_function_coverage=1 00:20:04.613 --rc genhtml_legend=1 00:20:04.613 --rc geninfo_all_blocks=1 00:20:04.613 --rc geninfo_unexecuted_blocks=1 00:20:04.613 00:20:04.613 ' 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:04.613 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:04.613 Cannot find device "nvmf_init_br" 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:04.613 Cannot find device "nvmf_init_br2" 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:04.613 Cannot find device "nvmf_tgt_br" 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:04.613 Cannot find device "nvmf_tgt_br2" 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:04.613 Cannot find device "nvmf_init_br" 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:04.613 Cannot find device "nvmf_init_br2" 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:04.613 Cannot find device "nvmf_tgt_br" 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:04.613 Cannot find device "nvmf_tgt_br2" 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:04.613 Cannot find device "nvmf_br" 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:20:04.613 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:04.874 Cannot find device "nvmf_init_if" 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:04.874 Cannot find device "nvmf_init_if2" 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:04.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
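Note: the "Cannot find device" / "Cannot open network namespace" messages above are tolerated cleanup failures (each is followed by a "# true" trace entry); the old topology is torn down defensively before it is rebuilt. The trace below then recreates the veth fabric the NVMe/TCP tests run over. Condensed into standalone commands, covering only the first initiator/target pair and reusing the interface, namespace, and address names visible in the trace, the setup is roughly:

  # target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per side; the *_br ends stay on the host so they can be bridged
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # initiator gets 10.0.0.1, target listener gets 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  # a bridge joins the host-side peers so initiator and target namespaces can reach each other
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # open the NVMe/TCP listener port in the host firewall (the real script also tags the rule with a comment)
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT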
00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:04.874 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:04.874 11:02:57 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:04.874 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:04.874 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.131 ms 00:20:04.874 00:20:04.874 --- 10.0.0.3 ping statistics --- 00:20:04.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.874 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:20:04.874 11:02:58 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:04.874 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:04.874 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:20:04.874 00:20:04.874 --- 10.0.0.4 ping statistics --- 00:20:04.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.874 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:04.874 11:02:58 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:04.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:04.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:20:04.874 00:20:04.874 --- 10.0.0.1 ping statistics --- 00:20:04.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.874 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:04.874 11:02:58 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:04.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:04.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:20:04.874 00:20:04.874 --- 10.0.0.2 ping statistics --- 00:20:04.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.874 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:20:04.874 11:02:58 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.874 11:02:58 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:20:04.874 11:02:58 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:04.874 11:02:58 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:05.814 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:05.814 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:06.073 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:06.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84371 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84371 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84371 ']' 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:06.073 11:02:59 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:06.074 [2024-12-09 11:02:59.173941] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:20:06.074 [2024-12-09 11:02:59.174010] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.333 [2024-12-09 11:02:59.328422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:06.333 [2024-12-09 11:02:59.393874] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.333 [2024-12-09 11:02:59.393933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.333 [2024-12-09 11:02:59.393941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.333 [2024-12-09 11:02:59.393947] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.333 [2024-12-09 11:02:59.393951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:06.333 [2024-12-09 11:02:59.395272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.333 [2024-12-09 11:02:59.395458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.333 [2024-12-09 11:02:59.395566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:06.333 [2024-12-09 11:02:59.395580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.333 [2024-12-09 11:02:59.472790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:20:06.927 11:03:00 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:06.927 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
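The enumeration above (nvme_in_userspace) walks PCI class 01, subclass 08, prog-if 02 via lspci and keeps the controllers left unclaimed by the kernel nvme driver, i.e. the ones a userspace driver can attach to; here that yields 0000:00:10.0 and 0000:00:11.0. Pulled out of the xtrace (and dropping the FreeBSD branch), the logic reduces to roughly this sketch:

  # enumerate NVMe-class functions (class 01 = mass storage, subclass 08 = NVM, prog-if 02 = NVMe)
  nvmes=($(lspci -mm -n -D | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'))
  bdfs=()
  for bdf in "${nvmes[@]}"; do
    # skip controllers already bound to the kernel nvme driver
    [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue
    bdfs+=("$bdf")
  done
  printf '%s\n' "${bdfs[@]}"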
00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.187 11:03:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:07.187 ************************************ 00:20:07.187 START TEST spdk_target_abort 00:20:07.187 ************************************ 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:07.187 spdk_targetn1 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:07.187 [2024-12-09 11:03:00.207508] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:07.187 [2024-12-09 11:03:00.254158] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:07.187 11:03:00 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.187 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:07.188 11:03:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:10.477 Initializing NVMe Controllers 00:20:10.477 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:10.477 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:10.477 Initialization complete. Launching workers. 
00:20:10.477 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12309, failed: 0 00:20:10.477 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1066, failed to submit 11243 00:20:10.477 success 808, unsuccessful 258, failed 0 00:20:10.477 11:03:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:10.477 11:03:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:14.671 Initializing NVMe Controllers 00:20:14.671 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:14.671 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:14.671 Initialization complete. Launching workers. 00:20:14.671 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9000, failed: 0 00:20:14.671 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1212, failed to submit 7788 00:20:14.671 success 340, unsuccessful 872, failed 0 00:20:14.671 11:03:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:14.671 11:03:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:17.205 Initializing NVMe Controllers 00:20:17.205 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:17.205 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:17.205 Initialization complete. Launching workers. 
00:20:17.205 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34961, failed: 0 00:20:17.205 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2314, failed to submit 32647 00:20:17.205 success 584, unsuccessful 1730, failed 0 00:20:17.205 11:03:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:17.205 11:03:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.205 11:03:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:17.205 11:03:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.205 11:03:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:17.205 11:03:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.205 11:03:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:19.105 11:03:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.105 11:03:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84371 00:20:19.105 11:03:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84371 ']' 00:20:19.105 11:03:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84371 00:20:19.105 11:03:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:20:19.105 11:03:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.105 11:03:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84371 00:20:19.105 killing process with pid 84371 00:20:19.105 11:03:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.105 11:03:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.105 11:03:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84371' 00:20:19.105 11:03:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84371 00:20:19.105 11:03:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84371 00:20:19.365 00:20:19.365 real 0m12.222s 00:20:19.365 user 0m49.589s 00:20:19.365 sys 0m1.856s 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:19.365 ************************************ 00:20:19.365 END TEST spdk_target_abort 00:20:19.365 ************************************ 00:20:19.365 11:03:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:19.365 11:03:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:19.365 11:03:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:19.365 11:03:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:19.365 ************************************ 00:20:19.365 START TEST kernel_target_abort 00:20:19.365 
************************************ 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:19.365 11:03:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:19.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:19.934 Waiting for block devices as requested 00:20:19.934 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:20.194 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:20.194 No valid GPT data, bailing 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:20.194 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:20.454 No valid GPT data, bailing 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:20.454 No valid GPT data, bailing 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:20.454 No valid GPT data, bailing 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:20.454 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 --hostid=0813c78c-bf40-477e-b94d-3900e5d9beb7 -a 10.0.0.1 -t tcp -s 4420 00:20:20.714 00:20:20.714 Discovery Log Number of Records 2, Generation counter 2 00:20:20.714 =====Discovery Log Entry 0====== 00:20:20.714 trtype: tcp 00:20:20.714 adrfam: ipv4 00:20:20.714 subtype: current discovery subsystem 00:20:20.714 treq: not specified, sq flow control disable supported 00:20:20.714 portid: 1 00:20:20.714 trsvcid: 4420 00:20:20.714 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:20.714 traddr: 10.0.0.1 00:20:20.714 eflags: none 00:20:20.714 sectype: none 00:20:20.714 =====Discovery Log Entry 1====== 00:20:20.714 trtype: tcp 00:20:20.714 adrfam: ipv4 00:20:20.714 subtype: nvme subsystem 00:20:20.714 treq: not specified, sq flow control disable supported 00:20:20.714 portid: 1 00:20:20.714 trsvcid: 4420 00:20:20.714 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:20.714 traddr: 10.0.0.1 00:20:20.714 eflags: none 00:20:20.714 sectype: none 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:20.714 11:03:13 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:20.714 11:03:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:24.006 Initializing NVMe Controllers 00:20:24.006 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:24.006 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:24.006 Initialization complete. Launching workers. 00:20:24.006 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38558, failed: 0 00:20:24.006 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38558, failed to submit 0 00:20:24.006 success 0, unsuccessful 38558, failed 0 00:20:24.006 11:03:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:24.006 11:03:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:27.299 Initializing NVMe Controllers 00:20:27.299 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:27.299 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:27.299 Initialization complete. Launching workers. 
00:20:27.299 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80184, failed: 0 00:20:27.299 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37881, failed to submit 42303 00:20:27.299 success 0, unsuccessful 37881, failed 0 00:20:27.299 11:03:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:27.299 11:03:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:30.593 Initializing NVMe Controllers 00:20:30.593 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:30.593 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:30.593 Initialization complete. Launching workers. 00:20:30.593 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 104222, failed: 0 00:20:30.593 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26058, failed to submit 78164 00:20:30.593 success 0, unsuccessful 26058, failed 0 00:20:30.593 11:03:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:20:30.593 11:03:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:30.593 11:03:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:20:30.593 11:03:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:30.594 11:03:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:30.594 11:03:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:30.594 11:03:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:30.594 11:03:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:30.594 11:03:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:30.594 11:03:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:31.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:37.752 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:37.752 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:37.752 ************************************ 00:20:37.752 END TEST kernel_target_abort 00:20:37.752 ************************************ 00:20:37.752 00:20:37.752 real 0m18.348s 00:20:37.752 user 0m7.363s 00:20:37.752 sys 0m8.838s 00:20:37.752 11:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:37.752 11:03:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:20:37.752 
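For reference, the kernel target bring-up and tear-down traced above maps directly onto the Linux nvmet configfs interface. A minimal stand-alone sketch follows; the attribute file names (attr_allow_any_host, device_path, enable, addr_*) are assumed from the upstream nvmet configfs layout, since the xtrace shows the echoed values but not their redirection targets, and the SPDK-<nqn> serial-number write is omitted for the same reason.

#!/usr/bin/env bash
# Sketch: export /dev/nvme1n1 over NVMe/TCP with the kernel nvmet target,
# then tear it down again (mirrors the configure/clean_kernel_target steps above).
set -e
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet nvmet_tcp
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo 1            > "$subsys/attr_allow_any_host"       # accept any host NQN
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"  # back namespace 1 with the block device
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"                 # TCP listener on 10.0.0.1:4420
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/$nqn"                 # publish the subsystem on the port

# Tear-down, as done by clean_kernel_target once the abort runs finish.
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet

Once the port link exists, nvme discover -t tcp -a 10.0.0.1 -s 4420 returns two records (the discovery subsystem plus testnqn), which matches the discovery log dump printed earlier in this run.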
11:03:30 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:37.752 rmmod nvme_tcp 00:20:37.752 rmmod nvme_fabrics 00:20:37.752 rmmod nvme_keyring 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84371 ']' 00:20:37.752 Process with pid 84371 is not found 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84371 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84371 ']' 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84371 00:20:37.752 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84371) - No such process 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84371 is not found' 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:20:37.752 11:03:30 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:38.321 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:38.321 Waiting for block devices as requested 00:20:38.582 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:38.582 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:38.582 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:38.582 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:38.582 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:20:38.582 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:20:38.582 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:38.582 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:20:38.582 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:38.582 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:38.582 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:38.582 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:38.582 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:38.582 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.843 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:38.843 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:38.843 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:38.843 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:38.843 11:03:31 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:38.843 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:38.843 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:38.843 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.843 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.843 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:38.843 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.843 11:03:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:38.843 11:03:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.843 11:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:20:38.843 00:20:38.843 real 0m34.636s 00:20:38.843 user 0m58.279s 00:20:38.843 sys 0m12.731s 00:20:38.843 11:03:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.843 11:03:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:38.843 ************************************ 00:20:38.843 END TEST nvmf_abort_qd_sizes 00:20:38.843 ************************************ 00:20:38.843 11:03:32 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:38.843 11:03:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:38.843 11:03:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.843 11:03:32 -- common/autotest_common.sh@10 -- # set +x 00:20:39.103 ************************************ 00:20:39.103 START TEST keyring_file 00:20:39.103 ************************************ 00:20:39.103 11:03:32 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:39.103 * Looking for test storage... 
00:20:39.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:39.103 11:03:32 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:39.103 11:03:32 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:20:39.103 11:03:32 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:39.103 11:03:32 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:39.103 11:03:32 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.103 11:03:32 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.103 11:03:32 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.103 11:03:32 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@345 -- # : 1 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@353 -- # local d=1 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@355 -- # echo 1 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@353 -- # local d=2 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@355 -- # echo 2 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@368 -- # return 0 00:20:39.104 11:03:32 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.104 11:03:32 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:39.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.104 --rc genhtml_branch_coverage=1 00:20:39.104 --rc genhtml_function_coverage=1 00:20:39.104 --rc genhtml_legend=1 00:20:39.104 --rc geninfo_all_blocks=1 00:20:39.104 --rc geninfo_unexecuted_blocks=1 00:20:39.104 00:20:39.104 ' 00:20:39.104 11:03:32 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:39.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.104 --rc genhtml_branch_coverage=1 00:20:39.104 --rc genhtml_function_coverage=1 00:20:39.104 --rc genhtml_legend=1 00:20:39.104 --rc geninfo_all_blocks=1 00:20:39.104 --rc 
geninfo_unexecuted_blocks=1 00:20:39.104 00:20:39.104 ' 00:20:39.104 11:03:32 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:39.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.104 --rc genhtml_branch_coverage=1 00:20:39.104 --rc genhtml_function_coverage=1 00:20:39.104 --rc genhtml_legend=1 00:20:39.104 --rc geninfo_all_blocks=1 00:20:39.104 --rc geninfo_unexecuted_blocks=1 00:20:39.104 00:20:39.104 ' 00:20:39.104 11:03:32 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:39.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.104 --rc genhtml_branch_coverage=1 00:20:39.104 --rc genhtml_function_coverage=1 00:20:39.104 --rc genhtml_legend=1 00:20:39.104 --rc geninfo_all_blocks=1 00:20:39.104 --rc geninfo_unexecuted_blocks=1 00:20:39.104 00:20:39.104 ' 00:20:39.104 11:03:32 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:39.104 11:03:32 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.104 11:03:32 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:39.104 11:03:32 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.364 11:03:32 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.364 11:03:32 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.364 11:03:32 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.364 11:03:32 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.364 11:03:32 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.365 11:03:32 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.365 11:03:32 keyring_file -- paths/export.sh@5 -- # export PATH 00:20:39.365 11:03:32 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@51 -- # : 0 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.365 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:39.365 11:03:32 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:39.365 11:03:32 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:39.365 11:03:32 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:20:39.365 11:03:32 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:20:39.365 11:03:32 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:20:39.365 11:03:32 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:39.365 11:03:32 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lBTT77iIZk 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@733 -- # python - 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lBTT77iIZk 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lBTT77iIZk 00:20:39.365 11:03:32 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.lBTT77iIZk 00:20:39.365 11:03:32 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@17 -- # name=key1 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uNCI2KqsRi 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:20:39.365 11:03:32 keyring_file -- nvmf/common.sh@733 -- # python - 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uNCI2KqsRi 00:20:39.365 11:03:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uNCI2KqsRi 00:20:39.365 11:03:32 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.uNCI2KqsRi 00:20:39.365 11:03:32 keyring_file -- keyring/file.sh@30 -- # tgtpid=85314 00:20:39.365 11:03:32 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:39.365 11:03:32 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85314 00:20:39.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
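The two key files prepared above (/tmp/tmp.lBTT77iIZk and /tmp/tmp.uNCI2KqsRi) hold the raw hex keys converted into the NVMe TLS PSK interchange format. A rough reconstruction of that conversion is sketched below; the exact encoding (prefix, two-digit hash indicator, base64 of the key bytes followed by their CRC-32) is an assumption based on the interchange-format definition, since the body of the python one-liner is not echoed in the trace.

#!/usr/bin/env bash
# Sketch: build an interchange-format PSK file from a hex key and register
# it as a file-backed keyring entry (assumed encoding, see note above).
key_hex=00112233445566778899aabbccddeeff
digest=0            # 0 = no PSK digest, i.e. hash indicator "00"
path=$(mktemp)      # e.g. /tmp/tmp.lBTT77iIZk in this run

python3 -c 'import base64,sys,zlib; key=bytes.fromhex(sys.argv[1]); crc=zlib.crc32(key).to_bytes(4,"little"); print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key+crc).decode()))' "$key_hex" "$digest" > "$path"
chmod 0600 "$path"  # keyring_file rejects keys readable by group/other

# Later in the run, once bdevperf is listening on /var/tmp/bperf.sock:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"

The 0600 mode matters: a later step in this test deliberately loosens it to 0660 and confirms that keyring_file_add_key then fails with "Invalid permissions for key file".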
00:20:39.365 11:03:32 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85314 ']' 00:20:39.365 11:03:32 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.365 11:03:32 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.365 11:03:32 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.365 11:03:32 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.365 11:03:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:39.365 [2024-12-09 11:03:32.494018] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:20:39.365 [2024-12-09 11:03:32.494095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85314 ] 00:20:39.625 [2024-12-09 11:03:32.642677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.625 [2024-12-09 11:03:32.686419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.625 [2024-12-09 11:03:32.742412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:40.196 11:03:33 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.196 11:03:33 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:20:40.196 11:03:33 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:20:40.196 11:03:33 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.196 11:03:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:40.196 [2024-12-09 11:03:33.330085] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.196 null0 00:20:40.196 [2024-12-09 11:03:33.362002] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:40.196 [2024-12-09 11:03:33.362201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.456 11:03:33 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:40.456 [2024-12-09 11:03:33.393929] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:20:40.456 request: 00:20:40.456 { 
00:20:40.456 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:20:40.456 "secure_channel": false, 00:20:40.456 "listen_address": { 00:20:40.456 "trtype": "tcp", 00:20:40.456 "traddr": "127.0.0.1", 00:20:40.456 "trsvcid": "4420" 00:20:40.456 }, 00:20:40.456 "method": "nvmf_subsystem_add_listener", 00:20:40.456 "req_id": 1 00:20:40.456 } 00:20:40.456 Got JSON-RPC error response 00:20:40.456 response: 00:20:40.456 { 00:20:40.456 "code": -32602, 00:20:40.456 "message": "Invalid parameters" 00:20:40.456 } 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:40.456 11:03:33 keyring_file -- keyring/file.sh@47 -- # bperfpid=85331 00:20:40.456 11:03:33 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:20:40.456 11:03:33 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85331 /var/tmp/bperf.sock 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85331 ']' 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:40.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.456 11:03:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:40.456 [2024-12-09 11:03:33.452683] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
00:20:40.456 [2024-12-09 11:03:33.452755] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85331 ] 00:20:40.456 [2024-12-09 11:03:33.600905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.716 [2024-12-09 11:03:33.664303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.716 [2024-12-09 11:03:33.737112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:41.285 11:03:34 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.285 11:03:34 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:20:41.285 11:03:34 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lBTT77iIZk 00:20:41.285 11:03:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lBTT77iIZk 00:20:41.545 11:03:34 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uNCI2KqsRi 00:20:41.545 11:03:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uNCI2KqsRi 00:20:41.545 11:03:34 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:20:41.545 11:03:34 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:20:41.545 11:03:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:41.545 11:03:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:41.546 11:03:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:41.805 11:03:34 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.lBTT77iIZk == \/\t\m\p\/\t\m\p\.\l\B\T\T\7\7\i\I\Z\k ]] 00:20:41.805 11:03:34 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:20:41.805 11:03:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:41.805 11:03:34 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:20:41.805 11:03:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:41.805 11:03:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:42.064 11:03:35 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.uNCI2KqsRi == \/\t\m\p\/\t\m\p\.\u\N\C\I\2\K\q\s\R\i ]] 00:20:42.064 11:03:35 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:20:42.064 11:03:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:42.064 11:03:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:42.064 11:03:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:42.064 11:03:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:42.064 11:03:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:42.323 11:03:35 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:20:42.323 11:03:35 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:20:42.323 11:03:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:42.323 11:03:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:42.323 11:03:35 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:42.323 11:03:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:42.323 11:03:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:42.323 11:03:35 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:20:42.323 11:03:35 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:42.323 11:03:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:42.583 [2024-12-09 11:03:35.658575] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:42.583 nvme0n1 00:20:42.583 11:03:35 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:20:42.583 11:03:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:42.583 11:03:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:42.583 11:03:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:42.583 11:03:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:42.583 11:03:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:42.843 11:03:35 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:20:42.843 11:03:35 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:20:42.843 11:03:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:42.843 11:03:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:42.843 11:03:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:42.843 11:03:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:42.843 11:03:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:43.103 11:03:36 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:20:43.103 11:03:36 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:43.103 Running I/O for 1 seconds... 
00:20:44.485 18060.00 IOPS, 70.55 MiB/s 00:20:44.485 Latency(us) 00:20:44.485 [2024-12-09T11:03:37.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.485 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:20:44.485 nvme0n1 : 1.00 18103.37 70.72 0.00 0.00 7056.69 3920.71 11790.76 00:20:44.485 [2024-12-09T11:03:37.664Z] =================================================================================================================== 00:20:44.485 [2024-12-09T11:03:37.664Z] Total : 18103.37 70.72 0.00 0.00 7056.69 3920.71 11790.76 00:20:44.485 { 00:20:44.485 "results": [ 00:20:44.485 { 00:20:44.485 "job": "nvme0n1", 00:20:44.485 "core_mask": "0x2", 00:20:44.485 "workload": "randrw", 00:20:44.485 "percentage": 50, 00:20:44.485 "status": "finished", 00:20:44.485 "queue_depth": 128, 00:20:44.485 "io_size": 4096, 00:20:44.485 "runtime": 1.004675, 00:20:44.485 "iops": 18103.366760395154, 00:20:44.485 "mibps": 70.71627640779357, 00:20:44.485 "io_failed": 0, 00:20:44.485 "io_timeout": 0, 00:20:44.485 "avg_latency_us": 7056.69305679737, 00:20:44.485 "min_latency_us": 3920.7126637554584, 00:20:44.485 "max_latency_us": 11790.756331877728 00:20:44.485 } 00:20:44.485 ], 00:20:44.485 "core_count": 1 00:20:44.485 } 00:20:44.485 11:03:37 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:44.485 11:03:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:44.485 11:03:37 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:20:44.485 11:03:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:44.485 11:03:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:44.485 11:03:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:44.485 11:03:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:44.485 11:03:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:44.745 11:03:37 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:20:44.745 11:03:37 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:20:44.745 11:03:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:44.745 11:03:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:44.745 11:03:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:44.745 11:03:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:44.745 11:03:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:44.745 11:03:37 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:20:44.745 11:03:37 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:44.745 11:03:37 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:20:44.745 11:03:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:44.745 11:03:37 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:20:44.745 11:03:37 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.745 11:03:37 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:20:44.745 11:03:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:44.745 11:03:37 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:44.745 11:03:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:20:45.005 [2024-12-09 11:03:38.045906] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:45.005 [2024-12-09 11:03:38.046116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf15d0 (107): Transport endpoint is not connected 00:20:45.005 [2024-12-09 11:03:38.047106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf15d0 (9): Bad file descriptor 00:20:45.005 [2024-12-09 11:03:38.048103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:20:45.005 [2024-12-09 11:03:38.048121] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:45.005 [2024-12-09 11:03:38.048127] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:20:45.005 [2024-12-09 11:03:38.048134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:20:45.005 request: 00:20:45.005 { 00:20:45.005 "name": "nvme0", 00:20:45.005 "trtype": "tcp", 00:20:45.005 "traddr": "127.0.0.1", 00:20:45.005 "adrfam": "ipv4", 00:20:45.005 "trsvcid": "4420", 00:20:45.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:45.005 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:45.005 "prchk_reftag": false, 00:20:45.005 "prchk_guard": false, 00:20:45.005 "hdgst": false, 00:20:45.005 "ddgst": false, 00:20:45.005 "psk": "key1", 00:20:45.005 "allow_unrecognized_csi": false, 00:20:45.005 "method": "bdev_nvme_attach_controller", 00:20:45.005 "req_id": 1 00:20:45.005 } 00:20:45.005 Got JSON-RPC error response 00:20:45.005 response: 00:20:45.005 { 00:20:45.005 "code": -5, 00:20:45.005 "message": "Input/output error" 00:20:45.005 } 00:20:45.005 11:03:38 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:20:45.005 11:03:38 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:45.005 11:03:38 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:45.005 11:03:38 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:45.005 11:03:38 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:20:45.005 11:03:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:45.005 11:03:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:45.005 11:03:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:45.005 11:03:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:45.005 11:03:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:45.266 11:03:38 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:20:45.266 11:03:38 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:20:45.266 11:03:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:45.266 11:03:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:45.266 11:03:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:45.266 11:03:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:45.266 11:03:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:45.525 11:03:38 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:20:45.525 11:03:38 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:20:45.525 11:03:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:45.525 11:03:38 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:20:45.526 11:03:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:20:45.785 11:03:38 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:20:45.785 11:03:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:45.785 11:03:38 keyring_file -- keyring/file.sh@78 -- # jq length 00:20:46.044 11:03:39 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:20:46.044 11:03:39 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.lBTT77iIZk 00:20:46.044 11:03:39 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.lBTT77iIZk 00:20:46.044 11:03:39 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:20:46.044 11:03:39 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.lBTT77iIZk 00:20:46.044 11:03:39 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:20:46.044 11:03:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.044 11:03:39 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:20:46.044 11:03:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.044 11:03:39 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lBTT77iIZk 00:20:46.044 11:03:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lBTT77iIZk 00:20:46.304 [2024-12-09 11:03:39.237425] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lBTT77iIZk': 0100660 00:20:46.304 [2024-12-09 11:03:39.237452] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:46.304 request: 00:20:46.304 { 00:20:46.304 "name": "key0", 00:20:46.304 "path": "/tmp/tmp.lBTT77iIZk", 00:20:46.304 "method": "keyring_file_add_key", 00:20:46.304 "req_id": 1 00:20:46.304 } 00:20:46.304 Got JSON-RPC error response 00:20:46.304 response: 00:20:46.304 { 00:20:46.304 "code": -1, 00:20:46.304 "message": "Operation not permitted" 00:20:46.304 } 00:20:46.304 11:03:39 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:20:46.304 11:03:39 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:46.304 11:03:39 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:46.304 11:03:39 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:46.304 11:03:39 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.lBTT77iIZk 00:20:46.304 11:03:39 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.lBTT77iIZk 00:20:46.304 11:03:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.lBTT77iIZk 00:20:46.304 11:03:39 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.lBTT77iIZk 00:20:46.304 11:03:39 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:20:46.304 11:03:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:46.304 11:03:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:46.304 11:03:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:46.304 11:03:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:46.304 11:03:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:46.563 11:03:39 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:20:46.563 11:03:39 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:46.563 11:03:39 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:20:46.563 11:03:39 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:46.563 11:03:39 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:20:46.563 11:03:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.563 11:03:39 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:20:46.563 11:03:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:46.563 11:03:39 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:46.564 11:03:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:46.843 [2024-12-09 11:03:39.832400] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.lBTT77iIZk': No such file or directory 00:20:46.843 [2024-12-09 11:03:39.832481] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:20:46.843 [2024-12-09 11:03:39.832519] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:20:46.843 [2024-12-09 11:03:39.832536] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:20:46.843 [2024-12-09 11:03:39.832560] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:46.843 [2024-12-09 11:03:39.832576] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:20:46.843 request: 00:20:46.843 { 00:20:46.843 "name": "nvme0", 00:20:46.843 "trtype": "tcp", 00:20:46.843 "traddr": "127.0.0.1", 00:20:46.843 "adrfam": "ipv4", 00:20:46.843 "trsvcid": "4420", 00:20:46.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:46.843 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:46.843 "prchk_reftag": false, 00:20:46.843 "prchk_guard": false, 00:20:46.843 "hdgst": false, 00:20:46.843 "ddgst": false, 00:20:46.843 "psk": "key0", 00:20:46.843 "allow_unrecognized_csi": false, 00:20:46.843 "method": "bdev_nvme_attach_controller", 00:20:46.843 "req_id": 1 00:20:46.843 } 00:20:46.843 Got JSON-RPC error response 00:20:46.843 response: 00:20:46.843 { 00:20:46.843 "code": -19, 00:20:46.843 "message": "No such device" 00:20:46.843 } 00:20:46.843 11:03:39 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:20:46.843 11:03:39 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:46.843 11:03:39 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:46.843 11:03:39 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:46.843 11:03:39 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:20:46.844 11:03:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:47.136 11:03:40 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:47.136 11:03:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:47.136 11:03:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:47.136 11:03:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:47.136 
11:03:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:47.136 11:03:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:47.136 11:03:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.n9hPtAf2ZG 00:20:47.136 11:03:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:47.136 11:03:40 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:47.136 11:03:40 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:20:47.136 11:03:40 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:47.136 11:03:40 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:47.136 11:03:40 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:20:47.136 11:03:40 keyring_file -- nvmf/common.sh@733 -- # python - 00:20:47.136 11:03:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.n9hPtAf2ZG 00:20:47.136 11:03:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.n9hPtAf2ZG 00:20:47.136 11:03:40 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.n9hPtAf2ZG 00:20:47.136 11:03:40 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.n9hPtAf2ZG 00:20:47.136 11:03:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.n9hPtAf2ZG 00:20:47.136 11:03:40 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:47.136 11:03:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:47.406 nvme0n1 00:20:47.406 11:03:40 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:20:47.406 11:03:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:47.406 11:03:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:47.406 11:03:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:47.406 11:03:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:47.406 11:03:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:47.675 11:03:40 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:20:47.675 11:03:40 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:20:47.675 11:03:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:20:47.956 11:03:40 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:20:47.956 11:03:40 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:20:47.956 11:03:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:47.956 11:03:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:47.956 11:03:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:48.229 11:03:41 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:20:48.229 11:03:41 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:20:48.229 11:03:41 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:20:48.229 11:03:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:48.229 11:03:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:48.229 11:03:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:48.229 11:03:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:48.229 11:03:41 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:20:48.229 11:03:41 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:48.229 11:03:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:48.489 11:03:41 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:20:48.489 11:03:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:48.489 11:03:41 keyring_file -- keyring/file.sh@105 -- # jq length 00:20:48.748 11:03:41 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:20:48.748 11:03:41 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.n9hPtAf2ZG 00:20:48.748 11:03:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.n9hPtAf2ZG 00:20:49.008 11:03:41 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uNCI2KqsRi 00:20:49.008 11:03:41 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uNCI2KqsRi 00:20:49.008 11:03:42 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:49.008 11:03:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:49.268 nvme0n1 00:20:49.268 11:03:42 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:20:49.268 11:03:42 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:20:49.528 11:03:42 keyring_file -- keyring/file.sh@113 -- # config='{ 00:20:49.528 "subsystems": [ 00:20:49.528 { 00:20:49.528 "subsystem": "keyring", 00:20:49.528 "config": [ 00:20:49.528 { 00:20:49.528 "method": "keyring_file_add_key", 00:20:49.528 "params": { 00:20:49.528 "name": "key0", 00:20:49.528 "path": "/tmp/tmp.n9hPtAf2ZG" 00:20:49.528 } 00:20:49.528 }, 00:20:49.528 { 00:20:49.528 "method": "keyring_file_add_key", 00:20:49.528 "params": { 00:20:49.528 "name": "key1", 00:20:49.528 "path": "/tmp/tmp.uNCI2KqsRi" 00:20:49.528 } 00:20:49.528 } 00:20:49.528 ] 00:20:49.528 }, 00:20:49.528 { 00:20:49.528 "subsystem": "iobuf", 00:20:49.528 "config": [ 00:20:49.528 { 00:20:49.528 "method": "iobuf_set_options", 00:20:49.528 "params": { 00:20:49.528 "small_pool_count": 8192, 00:20:49.528 "large_pool_count": 1024, 00:20:49.528 "small_bufsize": 8192, 00:20:49.528 "large_bufsize": 135168, 00:20:49.528 "enable_numa": false 00:20:49.528 } 00:20:49.528 } 00:20:49.528 ] 00:20:49.528 }, 00:20:49.528 { 00:20:49.528 "subsystem": 
"sock", 00:20:49.528 "config": [ 00:20:49.528 { 00:20:49.528 "method": "sock_set_default_impl", 00:20:49.528 "params": { 00:20:49.528 "impl_name": "uring" 00:20:49.528 } 00:20:49.528 }, 00:20:49.528 { 00:20:49.528 "method": "sock_impl_set_options", 00:20:49.528 "params": { 00:20:49.528 "impl_name": "ssl", 00:20:49.528 "recv_buf_size": 4096, 00:20:49.528 "send_buf_size": 4096, 00:20:49.528 "enable_recv_pipe": true, 00:20:49.528 "enable_quickack": false, 00:20:49.528 "enable_placement_id": 0, 00:20:49.528 "enable_zerocopy_send_server": true, 00:20:49.528 "enable_zerocopy_send_client": false, 00:20:49.528 "zerocopy_threshold": 0, 00:20:49.528 "tls_version": 0, 00:20:49.528 "enable_ktls": false 00:20:49.528 } 00:20:49.528 }, 00:20:49.528 { 00:20:49.528 "method": "sock_impl_set_options", 00:20:49.528 "params": { 00:20:49.528 "impl_name": "posix", 00:20:49.528 "recv_buf_size": 2097152, 00:20:49.528 "send_buf_size": 2097152, 00:20:49.528 "enable_recv_pipe": true, 00:20:49.528 "enable_quickack": false, 00:20:49.528 "enable_placement_id": 0, 00:20:49.528 "enable_zerocopy_send_server": true, 00:20:49.528 "enable_zerocopy_send_client": false, 00:20:49.528 "zerocopy_threshold": 0, 00:20:49.528 "tls_version": 0, 00:20:49.528 "enable_ktls": false 00:20:49.528 } 00:20:49.528 }, 00:20:49.528 { 00:20:49.528 "method": "sock_impl_set_options", 00:20:49.528 "params": { 00:20:49.528 "impl_name": "uring", 00:20:49.528 "recv_buf_size": 2097152, 00:20:49.528 "send_buf_size": 2097152, 00:20:49.528 "enable_recv_pipe": true, 00:20:49.528 "enable_quickack": false, 00:20:49.528 "enable_placement_id": 0, 00:20:49.528 "enable_zerocopy_send_server": false, 00:20:49.528 "enable_zerocopy_send_client": false, 00:20:49.528 "zerocopy_threshold": 0, 00:20:49.529 "tls_version": 0, 00:20:49.529 "enable_ktls": false 00:20:49.529 } 00:20:49.529 } 00:20:49.529 ] 00:20:49.529 }, 00:20:49.529 { 00:20:49.529 "subsystem": "vmd", 00:20:49.529 "config": [] 00:20:49.529 }, 00:20:49.529 { 00:20:49.529 "subsystem": "accel", 00:20:49.529 "config": [ 00:20:49.529 { 00:20:49.529 "method": "accel_set_options", 00:20:49.529 "params": { 00:20:49.529 "small_cache_size": 128, 00:20:49.529 "large_cache_size": 16, 00:20:49.529 "task_count": 2048, 00:20:49.529 "sequence_count": 2048, 00:20:49.529 "buf_count": 2048 00:20:49.529 } 00:20:49.529 } 00:20:49.529 ] 00:20:49.529 }, 00:20:49.529 { 00:20:49.529 "subsystem": "bdev", 00:20:49.529 "config": [ 00:20:49.529 { 00:20:49.529 "method": "bdev_set_options", 00:20:49.529 "params": { 00:20:49.529 "bdev_io_pool_size": 65535, 00:20:49.529 "bdev_io_cache_size": 256, 00:20:49.529 "bdev_auto_examine": true, 00:20:49.529 "iobuf_small_cache_size": 128, 00:20:49.529 "iobuf_large_cache_size": 16 00:20:49.529 } 00:20:49.529 }, 00:20:49.529 { 00:20:49.529 "method": "bdev_raid_set_options", 00:20:49.529 "params": { 00:20:49.529 "process_window_size_kb": 1024, 00:20:49.529 "process_max_bandwidth_mb_sec": 0 00:20:49.529 } 00:20:49.529 }, 00:20:49.529 { 00:20:49.529 "method": "bdev_iscsi_set_options", 00:20:49.529 "params": { 00:20:49.529 "timeout_sec": 30 00:20:49.529 } 00:20:49.529 }, 00:20:49.529 { 00:20:49.529 "method": "bdev_nvme_set_options", 00:20:49.529 "params": { 00:20:49.529 "action_on_timeout": "none", 00:20:49.529 "timeout_us": 0, 00:20:49.529 "timeout_admin_us": 0, 00:20:49.529 "keep_alive_timeout_ms": 10000, 00:20:49.529 "arbitration_burst": 0, 00:20:49.529 "low_priority_weight": 0, 00:20:49.529 "medium_priority_weight": 0, 00:20:49.529 "high_priority_weight": 0, 00:20:49.529 "nvme_adminq_poll_period_us": 
10000, 00:20:49.529 "nvme_ioq_poll_period_us": 0, 00:20:49.529 "io_queue_requests": 512, 00:20:49.529 "delay_cmd_submit": true, 00:20:49.529 "transport_retry_count": 4, 00:20:49.529 "bdev_retry_count": 3, 00:20:49.529 "transport_ack_timeout": 0, 00:20:49.529 "ctrlr_loss_timeout_sec": 0, 00:20:49.529 "reconnect_delay_sec": 0, 00:20:49.529 "fast_io_fail_timeout_sec": 0, 00:20:49.529 "disable_auto_failback": false, 00:20:49.529 "generate_uuids": false, 00:20:49.529 "transport_tos": 0, 00:20:49.529 "nvme_error_stat": false, 00:20:49.529 "rdma_srq_size": 0, 00:20:49.529 "io_path_stat": false, 00:20:49.529 "allow_accel_sequence": false, 00:20:49.529 "rdma_max_cq_size": 0, 00:20:49.529 "rdma_cm_event_timeout_ms": 0, 00:20:49.529 "dhchap_digests": [ 00:20:49.529 "sha256", 00:20:49.529 "sha384", 00:20:49.529 "sha512" 00:20:49.529 ], 00:20:49.529 "dhchap_dhgroups": [ 00:20:49.529 "null", 00:20:49.529 "ffdhe2048", 00:20:49.529 "ffdhe3072", 00:20:49.529 "ffdhe4096", 00:20:49.529 "ffdhe6144", 00:20:49.529 "ffdhe8192" 00:20:49.529 ] 00:20:49.529 } 00:20:49.529 }, 00:20:49.529 { 00:20:49.529 "method": "bdev_nvme_attach_controller", 00:20:49.529 "params": { 00:20:49.529 "name": "nvme0", 00:20:49.529 "trtype": "TCP", 00:20:49.529 "adrfam": "IPv4", 00:20:49.529 "traddr": "127.0.0.1", 00:20:49.529 "trsvcid": "4420", 00:20:49.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:49.529 "prchk_reftag": false, 00:20:49.529 "prchk_guard": false, 00:20:49.529 "ctrlr_loss_timeout_sec": 0, 00:20:49.529 "reconnect_delay_sec": 0, 00:20:49.529 "fast_io_fail_timeout_sec": 0, 00:20:49.529 "psk": "key0", 00:20:49.529 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:49.529 "hdgst": false, 00:20:49.529 "ddgst": false, 00:20:49.529 "multipath": "multipath" 00:20:49.529 } 00:20:49.529 }, 00:20:49.529 { 00:20:49.529 "method": "bdev_nvme_set_hotplug", 00:20:49.529 "params": { 00:20:49.529 "period_us": 100000, 00:20:49.529 "enable": false 00:20:49.529 } 00:20:49.529 }, 00:20:49.529 { 00:20:49.529 "method": "bdev_wait_for_examine" 00:20:49.529 } 00:20:49.529 ] 00:20:49.529 }, 00:20:49.529 { 00:20:49.529 "subsystem": "nbd", 00:20:49.529 "config": [] 00:20:49.529 } 00:20:49.529 ] 00:20:49.529 }' 00:20:49.529 11:03:42 keyring_file -- keyring/file.sh@115 -- # killprocess 85331 00:20:49.529 11:03:42 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85331 ']' 00:20:49.529 11:03:42 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85331 00:20:49.529 11:03:42 keyring_file -- common/autotest_common.sh@959 -- # uname 00:20:49.529 11:03:42 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.529 11:03:42 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85331 00:20:49.789 killing process with pid 85331 00:20:49.789 Received shutdown signal, test time was about 1.000000 seconds 00:20:49.789 00:20:49.789 Latency(us) 00:20:49.789 [2024-12-09T11:03:42.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.789 [2024-12-09T11:03:42.968Z] =================================================================================================================== 00:20:49.789 [2024-12-09T11:03:42.968Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:49.789 11:03:42 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:49.789 11:03:42 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:49.789 11:03:42 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85331' 00:20:49.789 
11:03:42 keyring_file -- common/autotest_common.sh@973 -- # kill 85331 00:20:49.789 11:03:42 keyring_file -- common/autotest_common.sh@978 -- # wait 85331 00:20:50.049 11:03:43 keyring_file -- keyring/file.sh@118 -- # bperfpid=85566 00:20:50.049 11:03:43 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85566 /var/tmp/bperf.sock 00:20:50.049 11:03:43 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85566 ']' 00:20:50.049 11:03:43 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:50.049 11:03:43 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:20:50.049 "subsystems": [ 00:20:50.049 { 00:20:50.049 "subsystem": "keyring", 00:20:50.049 "config": [ 00:20:50.049 { 00:20:50.049 "method": "keyring_file_add_key", 00:20:50.049 "params": { 00:20:50.049 "name": "key0", 00:20:50.049 "path": "/tmp/tmp.n9hPtAf2ZG" 00:20:50.049 } 00:20:50.049 }, 00:20:50.049 { 00:20:50.049 "method": "keyring_file_add_key", 00:20:50.049 "params": { 00:20:50.049 "name": "key1", 00:20:50.049 "path": "/tmp/tmp.uNCI2KqsRi" 00:20:50.049 } 00:20:50.049 } 00:20:50.049 ] 00:20:50.049 }, 00:20:50.049 { 00:20:50.049 "subsystem": "iobuf", 00:20:50.049 "config": [ 00:20:50.049 { 00:20:50.049 "method": "iobuf_set_options", 00:20:50.049 "params": { 00:20:50.049 "small_pool_count": 8192, 00:20:50.049 "large_pool_count": 1024, 00:20:50.049 "small_bufsize": 8192, 00:20:50.049 "large_bufsize": 135168, 00:20:50.049 "enable_numa": false 00:20:50.049 } 00:20:50.049 } 00:20:50.049 ] 00:20:50.049 }, 00:20:50.049 { 00:20:50.049 "subsystem": "sock", 00:20:50.049 "config": [ 00:20:50.049 { 00:20:50.049 "method": "sock_set_default_impl", 00:20:50.049 "params": { 00:20:50.049 "impl_name": "uring" 00:20:50.049 } 00:20:50.049 }, 00:20:50.049 { 00:20:50.049 "method": "sock_impl_set_options", 00:20:50.049 "params": { 00:20:50.049 "impl_name": "ssl", 00:20:50.049 "recv_buf_size": 4096, 00:20:50.049 "send_buf_size": 4096, 00:20:50.049 "enable_recv_pipe": true, 00:20:50.049 "enable_quickack": false, 00:20:50.049 "enable_placement_id": 0, 00:20:50.049 "enable_zerocopy_send_server": true, 00:20:50.049 "enable_zerocopy_send_client": false, 00:20:50.049 "zerocopy_threshold": 0, 00:20:50.049 "tls_version": 0, 00:20:50.049 "enable_ktls": false 00:20:50.049 } 00:20:50.049 }, 00:20:50.049 { 00:20:50.049 "method": "sock_impl_set_options", 00:20:50.049 "params": { 00:20:50.049 "impl_name": "posix", 00:20:50.049 "recv_buf_size": 2097152, 00:20:50.049 "send_buf_size": 2097152, 00:20:50.049 "enable_recv_pipe": true, 00:20:50.049 "enable_quickack": false, 00:20:50.049 "enable_placement_id": 0, 00:20:50.049 "enable_zerocopy_send_server": true, 00:20:50.049 "enable_zerocopy_send_client": false, 00:20:50.049 "zerocopy_threshold": 0, 00:20:50.049 "tls_version": 0, 00:20:50.049 "enable_ktls": false 00:20:50.049 } 00:20:50.049 }, 00:20:50.049 { 00:20:50.049 "method": "sock_impl_set_options", 00:20:50.049 "params": { 00:20:50.049 "impl_name": "uring", 00:20:50.049 "recv_buf_size": 2097152, 00:20:50.049 "send_buf_size": 2097152, 00:20:50.049 "enable_recv_pipe": true, 00:20:50.049 "enable_quickack": false, 00:20:50.049 "enable_placement_id": 0, 00:20:50.049 "enable_zerocopy_send_server": false, 00:20:50.049 "enable_zerocopy_send_client": false, 00:20:50.049 "zerocopy_threshold": 0, 00:20:50.049 "tls_version": 0, 00:20:50.049 "enable_ktls": false 00:20:50.049 } 00:20:50.049 } 00:20:50.049 ] 00:20:50.049 }, 00:20:50.049 { 00:20:50.049 "subsystem": "vmd", 00:20:50.049 "config": [] 00:20:50.049 }, 00:20:50.049 { 00:20:50.049 
"subsystem": "accel", 00:20:50.049 "config": [ 00:20:50.049 { 00:20:50.049 "method": "accel_set_options", 00:20:50.049 "params": { 00:20:50.049 "small_cache_size": 128, 00:20:50.049 "large_cache_size": 16, 00:20:50.049 "task_count": 2048, 00:20:50.049 "sequence_count": 2048, 00:20:50.049 "buf_count": 2048 00:20:50.049 } 00:20:50.049 } 00:20:50.049 ] 00:20:50.049 }, 00:20:50.049 { 00:20:50.049 "subsystem": "bdev", 00:20:50.049 "config": [ 00:20:50.049 { 00:20:50.049 "method": "bdev_set_options", 00:20:50.049 "params": { 00:20:50.049 "bdev_io_pool_size": 65535, 00:20:50.049 "bdev_io_cache_size": 256, 00:20:50.049 "bdev_auto_examine": true, 00:20:50.049 "iobuf_small_cache_size": 128, 00:20:50.049 "iobuf_large_cache_size": 16 00:20:50.049 } 00:20:50.049 }, 00:20:50.049 { 00:20:50.049 "method": "bdev_raid_set_options", 00:20:50.049 "params": { 00:20:50.049 "process_window_size_kb": 1024, 00:20:50.049 "process_max_bandwidth_mb_sec": 0 00:20:50.049 } 00:20:50.049 }, 00:20:50.049 { 00:20:50.049 "method": "bdev_iscsi_set_options", 00:20:50.049 "params": { 00:20:50.049 "timeout_sec": 30 00:20:50.049 } 00:20:50.049 }, 00:20:50.049 { 00:20:50.049 "method": "bdev_nvme_set_options", 00:20:50.049 "params": { 00:20:50.049 "action_on_timeout": "none", 00:20:50.049 "timeout_us": 0, 00:20:50.049 "timeout_admin_us": 0, 00:20:50.049 "keep_alive_timeout_ms": 10000, 00:20:50.049 "arbitration_burst": 0, 00:20:50.049 "low_priority_weight": 0, 00:20:50.049 "medium_priority_weight": 0, 00:20:50.049 "high_priority_weight": 0, 00:20:50.049 "nvme_adminq_poll_period_us": 10000, 00:20:50.049 "nvme_ioq_poll_period_us": 0, 00:20:50.049 "io_queue_requests": 512, 00:20:50.049 "delay_cmd_submit": true, 00:20:50.049 "transport_retry_count": 4, 00:20:50.049 "bdev_retry_count": 3, 00:20:50.049 "transport_ack_timeout": 0, 00:20:50.049 "ctrlr_loss_timeout_sec": 0, 00:20:50.049 "reconnect_delay_sec": 0, 00:20:50.050 "fast_io_fail_timeout_sec": 0, 00:20:50.050 "disable_auto_failback": false, 00:20:50.050 "generate_uuids": false, 00:20:50.050 "transport_tos": 0, 00:20:50.050 "nvme_error_stat": false, 00:20:50.050 "rdma_srq_size": 0, 00:20:50.050 "io_path_stat": false, 00:20:50.050 "allow_accel_sequence": false, 00:20:50.050 "rdma_max_cq_size": 0, 00:20:50.050 "rdma_cm_event_timeout_ms": 0, 00:20:50.050 "dhchap_digests": [ 00:20:50.050 "sha256", 00:20:50.050 "sha384", 00:20:50.050 "sha512" 00:20:50.050 ], 00:20:50.050 "dhchap_dhgroups": [ 00:20:50.050 "null", 00:20:50.050 "ffdhe2048", 00:20:50.050 "ffdhe3072", 00:20:50.050 "ffdhe4096", 00:20:50.050 "ffdhe6144", 00:20:50.050 "ffdhe8192" 00:20:50.050 ] 00:20:50.050 } 00:20:50.050 }, 00:20:50.050 { 00:20:50.050 "method": "bdev_nvme_attach_controller", 00:20:50.050 "params": { 00:20:50.050 "name": "nvme0", 00:20:50.050 "trtype": "TCP", 00:20:50.050 "adrfam": "IPv4", 00:20:50.050 "traddr": "127.0.0.1", 00:20:50.050 "trsvcid": "4420", 00:20:50.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:50.050 "prchk_reftag": false, 00:20:50.050 "prchk_guard": false, 00:20:50.050 "ctrlr_loss_timeout_sec": 0, 00:20:50.050 "reconnect_delay_sec": 0, 00:20:50.050 "fast_io_fail_timeout_sec": 0, 00:20:50.050 "psk": "key0", 00:20:50.050 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:50.050 "hdgst": false, 00:20:50.050 "ddgst": false, 00:20:50.050 "multipath": "multipath" 00:20:50.050 } 00:20:50.050 }, 00:20:50.050 { 00:20:50.050 "method": "bdev_nvme_set_hotplug", 00:20:50.050 "params": { 00:20:50.050 "period_us": 100000, 00:20:50.050 "enable": false 00:20:50.050 } 00:20:50.050 }, 00:20:50.050 { 00:20:50.050 
"method": "bdev_wait_for_examine" 00:20:50.050 } 00:20:50.050 ] 00:20:50.050 }, 00:20:50.050 { 00:20:50.050 "subsystem": "nbd", 00:20:50.050 "config": [] 00:20:50.050 } 00:20:50.050 ] 00:20:50.050 }' 00:20:50.050 11:03:43 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.050 11:03:43 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:20:50.050 11:03:43 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:50.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:50.050 11:03:43 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.050 11:03:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:50.050 [2024-12-09 11:03:43.065775] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 00:20:50.050 [2024-12-09 11:03:43.065836] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85566 ] 00:20:50.050 [2024-12-09 11:03:43.217098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.309 [2024-12-09 11:03:43.282970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.309 [2024-12-09 11:03:43.436475] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:50.568 [2024-12-09 11:03:43.506347] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:50.827 11:03:43 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.827 11:03:43 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:20:50.827 11:03:43 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:20:50.827 11:03:43 keyring_file -- keyring/file.sh@121 -- # jq length 00:20:50.827 11:03:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:51.087 11:03:44 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:20:51.087 11:03:44 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:20:51.087 11:03:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:51.087 11:03:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:51.087 11:03:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:51.087 11:03:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:51.087 11:03:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:51.346 11:03:44 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:20:51.346 11:03:44 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:20:51.346 11:03:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:51.346 11:03:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:51.347 11:03:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:51.347 11:03:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:51.347 11:03:44 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:51.347 11:03:44 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:20:51.347 11:03:44 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:20:51.347 11:03:44 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:20:51.347 11:03:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:20:51.606 11:03:44 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:20:51.606 11:03:44 keyring_file -- keyring/file.sh@1 -- # cleanup 00:20:51.606 11:03:44 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.n9hPtAf2ZG /tmp/tmp.uNCI2KqsRi 00:20:51.606 11:03:44 keyring_file -- keyring/file.sh@20 -- # killprocess 85566 00:20:51.606 11:03:44 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85566 ']' 00:20:51.606 11:03:44 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85566 00:20:51.606 11:03:44 keyring_file -- common/autotest_common.sh@959 -- # uname 00:20:51.606 11:03:44 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.606 11:03:44 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85566 00:20:51.606 killing process with pid 85566 00:20:51.606 Received shutdown signal, test time was about 1.000000 seconds 00:20:51.606 00:20:51.606 Latency(us) 00:20:51.606 [2024-12-09T11:03:44.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.606 [2024-12-09T11:03:44.785Z] =================================================================================================================== 00:20:51.606 [2024-12-09T11:03:44.785Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:51.606 11:03:44 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:51.606 11:03:44 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:51.606 11:03:44 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85566' 00:20:51.606 11:03:44 keyring_file -- common/autotest_common.sh@973 -- # kill 85566 00:20:51.607 11:03:44 keyring_file -- common/autotest_common.sh@978 -- # wait 85566 00:20:52.175 11:03:45 keyring_file -- keyring/file.sh@21 -- # killprocess 85314 00:20:52.175 11:03:45 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85314 ']' 00:20:52.175 11:03:45 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85314 00:20:52.175 11:03:45 keyring_file -- common/autotest_common.sh@959 -- # uname 00:20:52.175 11:03:45 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.175 11:03:45 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85314 00:20:52.175 killing process with pid 85314 00:20:52.175 11:03:45 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:52.175 11:03:45 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:52.175 11:03:45 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85314' 00:20:52.175 11:03:45 keyring_file -- common/autotest_common.sh@973 -- # kill 85314 00:20:52.175 11:03:45 keyring_file -- common/autotest_common.sh@978 -- # wait 85314 00:20:52.435 00:20:52.435 real 0m13.434s 00:20:52.435 user 0m31.769s 00:20:52.435 sys 0m3.031s 00:20:52.435 11:03:45 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.435 11:03:45 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:52.435 ************************************ 00:20:52.435 END TEST keyring_file 00:20:52.435 ************************************ 00:20:52.435 11:03:45 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:20:52.435 11:03:45 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:20:52.435 11:03:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:52.435 11:03:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.435 11:03:45 -- common/autotest_common.sh@10 -- # set +x 00:20:52.435 ************************************ 00:20:52.435 START TEST keyring_linux 00:20:52.435 ************************************ 00:20:52.435 11:03:45 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:20:52.435 Joined session keyring: 601461095 00:20:52.696 * Looking for test storage... 00:20:52.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:52.696 11:03:45 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:52.696 11:03:45 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:20:52.696 11:03:45 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:52.696 11:03:45 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@345 -- # : 1 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:52.696 11:03:45 keyring_linux -- scripts/common.sh@368 -- # return 0 00:20:52.696 11:03:45 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:52.696 11:03:45 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:52.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.696 --rc genhtml_branch_coverage=1 00:20:52.696 --rc genhtml_function_coverage=1 00:20:52.696 --rc genhtml_legend=1 00:20:52.696 --rc geninfo_all_blocks=1 00:20:52.696 --rc geninfo_unexecuted_blocks=1 00:20:52.696 00:20:52.696 ' 00:20:52.696 11:03:45 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:52.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.696 --rc genhtml_branch_coverage=1 00:20:52.696 --rc genhtml_function_coverage=1 00:20:52.696 --rc genhtml_legend=1 00:20:52.696 --rc geninfo_all_blocks=1 00:20:52.696 --rc geninfo_unexecuted_blocks=1 00:20:52.696 00:20:52.696 ' 00:20:52.696 11:03:45 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:52.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.696 --rc genhtml_branch_coverage=1 00:20:52.696 --rc genhtml_function_coverage=1 00:20:52.696 --rc genhtml_legend=1 00:20:52.696 --rc geninfo_all_blocks=1 00:20:52.696 --rc geninfo_unexecuted_blocks=1 00:20:52.696 00:20:52.696 ' 00:20:52.696 11:03:45 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:52.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.696 --rc genhtml_branch_coverage=1 00:20:52.696 --rc genhtml_function_coverage=1 00:20:52.696 --rc genhtml_legend=1 00:20:52.696 --rc geninfo_all_blocks=1 00:20:52.696 --rc geninfo_unexecuted_blocks=1 00:20:52.696 00:20:52.696 ' 00:20:52.696 11:03:45 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:52.696 11:03:45 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:52.696 11:03:45 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:20:52.696 11:03:45 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:52.696 11:03:45 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.697 11:03:45 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:0813c78c-bf40-477e-b94d-3900e5d9beb7 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=0813c78c-bf40-477e-b94d-3900e5d9beb7 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:52.697 11:03:45 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:20:52.697 11:03:45 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.697 11:03:45 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.697 11:03:45 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.697 11:03:45 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.697 11:03:45 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.697 11:03:45 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.697 11:03:45 keyring_linux -- paths/export.sh@5 -- # export PATH 00:20:52.697 11:03:45 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:52.697 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:52.697 11:03:45 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:52.697 11:03:45 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:52.697 11:03:45 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:52.697 11:03:45 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:20:52.697 11:03:45 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:20:52.697 11:03:45 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:20:52.697 11:03:45 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:20:52.697 11:03:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:20:52.697 11:03:45 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:20:52.697 11:03:45 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:52.697 11:03:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:20:52.697 11:03:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:20:52.697 11:03:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@733 -- # python - 00:20:52.697 11:03:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:20:52.697 /tmp/:spdk-test:key0 00:20:52.697 11:03:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:20:52.697 11:03:45 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:20:52.697 11:03:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:20:52.697 11:03:45 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:20:52.697 11:03:45 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:52.697 11:03:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:20:52.697 11:03:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:20:52.697 11:03:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:20:52.697 11:03:45 keyring_linux -- nvmf/common.sh@733 -- # python - 00:20:52.957 11:03:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:20:52.957 /tmp/:spdk-test:key1 00:20:52.957 11:03:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:20:52.957 11:03:45 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85687 00:20:52.957 11:03:45 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:52.957 11:03:45 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85687 00:20:52.957 11:03:45 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85687 ']' 00:20:52.957 11:03:45 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.957 11:03:45 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.957 11:03:45 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.957 11:03:45 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.957 11:03:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:52.957 [2024-12-09 11:03:45.971378] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
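For reference, the NVMeTLSkey-1:00:...: strings traced above are the TLS PSK interchange form that the harness builds via format_interchange_psk (the inline 'python -' step): the configured hex string is used verbatim as the key payload, its CRC-32 is appended, and the result is base64-encoded between the prefix and a trailing colon. A minimal standalone sketch of that construction, with the little-endian CRC byte order being an assumption here:

    # sketch only; mirrors what format_interchange_psk appears to emit for digest 0
    key=00112233445566778899aabbccddeeff
    python3 -c "import base64,zlib,sys;k=sys.argv[1].encode();print('NVMeTLSkey-1:00:'+base64.b64encode(k+zlib.crc32(k).to_bytes(4,'little')).decode()+':')" "$key"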
00:20:52.957 [2024-12-09 11:03:45.971465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85687 ] 00:20:52.957 [2024-12-09 11:03:46.117791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.217 [2024-12-09 11:03:46.161689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.217 [2024-12-09 11:03:46.216816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:53.786 11:03:46 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.786 11:03:46 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:20:53.786 11:03:46 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:20:53.786 11:03:46 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.786 11:03:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:53.786 [2024-12-09 11:03:46.786743] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.786 null0 00:20:53.786 [2024-12-09 11:03:46.818657] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.787 [2024-12-09 11:03:46.818836] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:53.787 11:03:46 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.787 11:03:46 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:20:53.787 213044497 00:20:53.787 11:03:46 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:20:53.787 543775448 00:20:53.787 11:03:46 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85700 00:20:53.787 11:03:46 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:20:53.787 11:03:46 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85700 /var/tmp/bperf.sock 00:20:53.787 11:03:46 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 85700 ']' 00:20:53.787 11:03:46 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:53.787 11:03:46 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:53.787 11:03:46 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:53.787 11:03:46 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.787 11:03:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:20:53.787 [2024-12-09 11:03:46.898437] Starting SPDK v25.01-pre git sha1 25cdf096c / DPDK 24.03.0 initialization... 
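The keyctl calls above stage the two interchange keys in the current session keyring under the names :spdk-test:key0 and :spdk-test:key1; with keyring_linux_set_options --enable, the --psk :spdk-test:key0 argument given to bdev_nvme_attach_controller is resolved from that keyring rather than from a key file. A small sketch of the same round trip (key value elided; serial numbers differ per session):

    # add a PSK to the session keyring and note the returned serial
    sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:...:" @s)
    # look it up again by description, as keyring/linux.sh's get_keysn does
    keyctl search @s user :spdk-test:key0
    # inspect the payload, then drop the key when finished
    keyctl print "$sn"
    keyctl unlink "$sn"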
00:20:53.787 [2024-12-09 11:03:46.898493] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85700 ] 00:20:54.046 [2024-12-09 11:03:47.027814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.046 [2024-12-09 11:03:47.092846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.616 11:03:47 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.616 11:03:47 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:20:54.616 11:03:47 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:20:54.616 11:03:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:20:54.875 11:03:47 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:20:54.875 11:03:47 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:55.135 [2024-12-09 11:03:48.144395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:55.135 11:03:48 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:20:55.135 11:03:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:20:55.395 [2024-12-09 11:03:48.381278] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:55.395 nvme0n1 00:20:55.395 11:03:48 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:20:55.395 11:03:48 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:20:55.395 11:03:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:55.395 11:03:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:55.395 11:03:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:55.395 11:03:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:55.655 11:03:48 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:20:55.655 11:03:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:55.655 11:03:48 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:20:55.655 11:03:48 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:20:55.655 11:03:48 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:20:55.655 11:03:48 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:55.655 11:03:48 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:55.915 11:03:48 keyring_linux -- keyring/linux.sh@25 -- # sn=213044497 00:20:55.915 11:03:48 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:20:55.915 11:03:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
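The check that follows cross-references the two views of the same key: keyring_get_keys on the bdevperf RPC socket reports an .sn field for :spdk-test:key0, and it must equal the serial that keyctl search finds in the session keyring. Roughly, using the same socket path as this run:

    # sketch of the serial cross-check performed by check_keys in keyring/linux.sh
    rpc_sn=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
    kernel_sn=$(keyctl search @s user :spdk-test:key0)
    [[ "$rpc_sn" == "$kernel_sn" ]] && echo "key0 resolved from the session keyring"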
00:20:55.915 11:03:48 keyring_linux -- keyring/linux.sh@26 -- # [[ 213044497 == \2\1\3\0\4\4\4\9\7 ]] 00:20:55.915 11:03:48 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 213044497 00:20:55.915 11:03:48 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:20:55.915 11:03:48 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:55.915 Running I/O for 1 seconds... 00:20:56.854 18252.00 IOPS, 71.30 MiB/s 00:20:56.854 Latency(us) 00:20:56.854 [2024-12-09T11:03:50.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.854 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:56.854 nvme0n1 : 1.01 18254.13 71.31 0.00 0.00 6985.74 3319.73 9501.29 00:20:56.854 [2024-12-09T11:03:50.033Z] =================================================================================================================== 00:20:56.854 [2024-12-09T11:03:50.033Z] Total : 18254.13 71.31 0.00 0.00 6985.74 3319.73 9501.29 00:20:56.854 { 00:20:56.854 "results": [ 00:20:56.854 { 00:20:56.854 "job": "nvme0n1", 00:20:56.854 "core_mask": "0x2", 00:20:56.854 "workload": "randread", 00:20:56.854 "status": "finished", 00:20:56.854 "queue_depth": 128, 00:20:56.854 "io_size": 4096, 00:20:56.854 "runtime": 1.00695, 00:20:56.854 "iops": 18254.13377029644, 00:20:56.854 "mibps": 71.30521004022047, 00:20:56.854 "io_failed": 0, 00:20:56.854 "io_timeout": 0, 00:20:56.854 "avg_latency_us": 6985.738344488529, 00:20:56.854 "min_latency_us": 3319.7275109170305, 00:20:56.854 "max_latency_us": 9501.289082969432 00:20:56.854 } 00:20:56.854 ], 00:20:56.854 "core_count": 1 00:20:56.854 } 00:20:56.854 11:03:49 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:20:56.855 11:03:49 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:20:57.114 11:03:50 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:20:57.114 11:03:50 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:20:57.114 11:03:50 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:20:57.114 11:03:50 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:20:57.114 11:03:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:20:57.114 11:03:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:57.374 11:03:50 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:20:57.374 11:03:50 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:20:57.374 11:03:50 keyring_linux -- keyring/linux.sh@23 -- # return 00:20:57.374 11:03:50 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:57.374 11:03:50 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:20:57.374 11:03:50 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:57.374 
11:03:50 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:20:57.374 11:03:50 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:57.374 11:03:50 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:20:57.374 11:03:50 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:57.375 11:03:50 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:57.375 11:03:50 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:20:57.635 [2024-12-09 11:03:50.593419] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:57.635 [2024-12-09 11:03:50.593555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c21d0 (107): Transport endpoint is not connected 00:20:57.635 [2024-12-09 11:03:50.594545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6c21d0 (9): Bad file descriptor 00:20:57.635 [2024-12-09 11:03:50.595543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:20:57.635 [2024-12-09 11:03:50.595562] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:20:57.635 [2024-12-09 11:03:50.595568] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:20:57.635 [2024-12-09 11:03:50.595575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:20:57.635 request: 00:20:57.635 { 00:20:57.635 "name": "nvme0", 00:20:57.635 "trtype": "tcp", 00:20:57.635 "traddr": "127.0.0.1", 00:20:57.635 "adrfam": "ipv4", 00:20:57.635 "trsvcid": "4420", 00:20:57.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:57.635 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:57.635 "prchk_reftag": false, 00:20:57.635 "prchk_guard": false, 00:20:57.635 "hdgst": false, 00:20:57.635 "ddgst": false, 00:20:57.635 "psk": ":spdk-test:key1", 00:20:57.635 "allow_unrecognized_csi": false, 00:20:57.635 "method": "bdev_nvme_attach_controller", 00:20:57.635 "req_id": 1 00:20:57.635 } 00:20:57.635 Got JSON-RPC error response 00:20:57.635 response: 00:20:57.635 { 00:20:57.635 "code": -5, 00:20:57.635 "message": "Input/output error" 00:20:57.635 } 00:20:57.635 11:03:50 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:20:57.635 11:03:50 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:57.635 11:03:50 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:57.635 11:03:50 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:57.635 11:03:50 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:20:57.635 11:03:50 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:57.635 11:03:50 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:20:57.635 11:03:50 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:20:57.635 11:03:50 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:20:57.635 11:03:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:20:57.635 11:03:50 keyring_linux -- keyring/linux.sh@33 -- # sn=213044497 00:20:57.635 11:03:50 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 213044497 00:20:57.635 1 links removed 00:20:57.635 11:03:50 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:20:57.635 11:03:50 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:20:57.635 11:03:50 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:20:57.635 11:03:50 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:20:57.635 11:03:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:20:57.635 11:03:50 keyring_linux -- keyring/linux.sh@33 -- # sn=543775448 00:20:57.635 11:03:50 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 543775448 00:20:57.635 1 links removed 00:20:57.635 11:03:50 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85700 00:20:57.635 11:03:50 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85700 ']' 00:20:57.635 11:03:50 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85700 00:20:57.635 11:03:50 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:20:57.635 11:03:50 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.635 11:03:50 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85700 00:20:57.635 11:03:50 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:57.635 11:03:50 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:57.635 killing process with pid 85700 00:20:57.635 11:03:50 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85700' 00:20:57.635 11:03:50 keyring_linux -- common/autotest_common.sh@973 -- # kill 85700 00:20:57.635 Received shutdown signal, test time was about 1.000000 seconds 00:20:57.635 00:20:57.635 Latency(us) 
00:20:57.635 [2024-12-09T11:03:50.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:57.635 [2024-12-09T11:03:50.814Z] ===================================================================================================================
00:20:57.635 [2024-12-09T11:03:50.814Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:57.635 11:03:50 keyring_linux -- common/autotest_common.sh@978 -- # wait 85700
00:20:57.895 11:03:50 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85687
00:20:57.895 11:03:50 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 85687 ']'
00:20:57.895 11:03:50 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 85687
00:20:57.895 11:03:50 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:20:57.895 11:03:50 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:57.895 11:03:50 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85687
00:20:57.895 11:03:51 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:57.895 11:03:51 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:57.895 killing process with pid 85687 11:03:51 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85687' 11:03:51 keyring_linux -- common/autotest_common.sh@973 -- # kill 85687
00:20:57.895 11:03:51 keyring_linux -- common/autotest_common.sh@978 -- # wait 85687
00:20:58.465
00:20:58.465 real 0m5.849s
00:20:58.465 user 0m10.441s
00:20:58.465 sys 0m1.637s
00:20:58.465 11:03:51 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:58.465 11:03:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:20:58.465 ************************************
00:20:58.465 END TEST keyring_linux
00:20:58.465 ************************************
00:20:58.465 11:03:51 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:20:58.465 11:03:51 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:20:58.465 11:03:51 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:20:58.465 11:03:51 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:20:58.465 11:03:51 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:20:58.465 11:03:51 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:20:58.465 11:03:51 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:20:58.465 11:03:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:20:58.465 11:03:51 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:20:58.465 11:03:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:20:58.465 11:03:51 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:20:58.465 11:03:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:20:58.465 11:03:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:20:58.465 11:03:51 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:20:58.465 11:03:51 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:20:58.465 11:03:51 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:20:58.465 11:03:51 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:20:58.465 11:03:51 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:58.465 11:03:51 -- common/autotest_common.sh@10 -- # set +x
00:20:58.465 11:03:51 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:20:58.465 11:03:51 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:20:58.465 11:03:51 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:20:58.465 11:03:51 -- common/autotest_common.sh@10 -- # set +x
00:21:01.008 INFO: APP EXITING
00:21:01.008 INFO: killing all VMs
00:21:01.008 INFO: killing vhost app
00:21:01.008 INFO: EXIT DONE
00:21:01.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:21:01.949 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:21:01.949 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:21:02.887 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:21:02.887 Cleaning
00:21:02.887 Removing: /var/run/dpdk/spdk0/config
00:21:02.887 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:21:02.887 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:21:02.887 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:21:02.887 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:21:02.888 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:21:02.888 Removing: /var/run/dpdk/spdk0/hugepage_info
00:21:02.888 Removing: /var/run/dpdk/spdk1/config
00:21:02.888 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:21:02.888 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:21:02.888 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:21:02.888 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:21:02.888 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:21:02.888 Removing: /var/run/dpdk/spdk1/hugepage_info
00:21:02.888 Removing: /var/run/dpdk/spdk2/config
00:21:02.888 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:21:02.888 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:21:02.888 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:21:02.888 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:21:02.888 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:21:02.888 Removing: /var/run/dpdk/spdk2/hugepage_info
00:21:02.888 Removing: /var/run/dpdk/spdk3/config
00:21:02.888 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:21:02.888 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:21:02.888 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:21:02.888 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:21:02.888 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:21:02.888 Removing: /var/run/dpdk/spdk3/hugepage_info
00:21:02.888 Removing: /var/run/dpdk/spdk4/config
00:21:02.888 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:21:02.888 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:21:02.888 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:21:02.888 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:21:02.888 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:21:02.888 Removing: /var/run/dpdk/spdk4/hugepage_info
00:21:02.888 Removing: /dev/shm/nvmf_trace.0
00:21:02.888 Removing: /dev/shm/spdk_tgt_trace.pid56951
00:21:02.888 Removing: /var/run/dpdk/spdk0
00:21:02.888 Removing: /var/run/dpdk/spdk1
00:21:02.888 Removing: /var/run/dpdk/spdk2
00:21:02.888 Removing: /var/run/dpdk/spdk3
00:21:02.888 Removing: /var/run/dpdk/spdk4
00:21:02.888 Removing: /var/run/dpdk/spdk_pid56798
00:21:02.888 Removing: /var/run/dpdk/spdk_pid56951
00:21:02.888 Removing: /var/run/dpdk/spdk_pid57157
00:21:02.888 Removing: /var/run/dpdk/spdk_pid57238
00:21:02.888 Removing: /var/run/dpdk/spdk_pid57271
00:21:02.888 Removing: /var/run/dpdk/spdk_pid57375
00:21:02.888 Removing: /var/run/dpdk/spdk_pid57393
00:21:02.888 Removing: /var/run/dpdk/spdk_pid57538
00:21:02.888 Removing: /var/run/dpdk/spdk_pid57728
00:21:02.888 Removing: /var/run/dpdk/spdk_pid57882
00:21:02.888 Removing: /var/run/dpdk/spdk_pid57960
00:21:02.888 Removing: /var/run/dpdk/spdk_pid58039
00:21:02.888 Removing: /var/run/dpdk/spdk_pid58138
00:21:02.888 Removing: /var/run/dpdk/spdk_pid58215
00:21:02.888 Removing: /var/run/dpdk/spdk_pid58248
00:21:03.148 Removing: /var/run/dpdk/spdk_pid58284
00:21:03.148 Removing: /var/run/dpdk/spdk_pid58353
00:21:03.148 Removing: /var/run/dpdk/spdk_pid58453
00:21:03.148 Removing: /var/run/dpdk/spdk_pid58886
00:21:03.148 Removing: /var/run/dpdk/spdk_pid58938
00:21:03.148 Removing: /var/run/dpdk/spdk_pid58988
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59004
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59072
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59084
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59145
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59161
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59207
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59225
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59265
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59283
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59413
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59449
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59531
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59863
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59881
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59912
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59925
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59941
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59960
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59979
00:21:03.148 Removing: /var/run/dpdk/spdk_pid59994
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60019
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60027
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60048
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60067
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60080
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60096
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60117
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60136
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60146
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60165
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60184
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60203
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60230
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60250
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60280
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60346
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60380
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60390
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60418
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60433
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60435
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60483
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60493
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60527
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60537
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60546
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60556
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60565
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60575
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60584
00:21:03.148 Removing: /var/run/dpdk/spdk_pid60594
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60622
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60654
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60664
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60692
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60702
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60709
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60750
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60761
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60788
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60801
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60803
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60816
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60823
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60831
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60838
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60846
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60928
00:21:03.409 Removing: /var/run/dpdk/spdk_pid60970
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61088
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61125
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61161
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61181
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61203
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61218
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61249
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61270
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61350
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61367
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61411
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61471
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61516
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61545
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61653
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61696
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61740
00:21:03.409 Removing: /var/run/dpdk/spdk_pid61968
00:21:03.409 Removing: /var/run/dpdk/spdk_pid62064
00:21:03.409 Removing: /var/run/dpdk/spdk_pid62091
00:21:03.409 Removing: /var/run/dpdk/spdk_pid62122
00:21:03.409 Removing: /var/run/dpdk/spdk_pid62159
00:21:03.409 Removing: /var/run/dpdk/spdk_pid62189
00:21:03.409 Removing: /var/run/dpdk/spdk_pid62228
00:21:03.409 Removing: /var/run/dpdk/spdk_pid62254
00:21:03.409 Removing: /var/run/dpdk/spdk_pid62649
00:21:03.409 Removing: /var/run/dpdk/spdk_pid62692
00:21:03.409 Removing: /var/run/dpdk/spdk_pid63046
00:21:03.409 Removing: /var/run/dpdk/spdk_pid63511
00:21:03.409 Removing: /var/run/dpdk/spdk_pid63770
00:21:03.409 Removing: /var/run/dpdk/spdk_pid64672
00:21:03.409 Removing: /var/run/dpdk/spdk_pid65589
00:21:03.409 Removing: /var/run/dpdk/spdk_pid65706
00:21:03.409 Removing: /var/run/dpdk/spdk_pid65774
00:21:03.409 Removing: /var/run/dpdk/spdk_pid67186
00:21:03.409 Removing: /var/run/dpdk/spdk_pid67502
00:21:03.409 Removing: /var/run/dpdk/spdk_pid70899
00:21:03.409 Removing: /var/run/dpdk/spdk_pid71248
00:21:03.409 Removing: /var/run/dpdk/spdk_pid71363
00:21:03.409 Removing: /var/run/dpdk/spdk_pid71498
00:21:03.669 Removing: /var/run/dpdk/spdk_pid71526
00:21:03.669 Removing: /var/run/dpdk/spdk_pid71555
00:21:03.669 Removing: /var/run/dpdk/spdk_pid71583
00:21:03.669 Removing: /var/run/dpdk/spdk_pid71682
00:21:03.669 Removing: /var/run/dpdk/spdk_pid71815
00:21:03.669 Removing: /var/run/dpdk/spdk_pid71994
00:21:03.669 Removing: /var/run/dpdk/spdk_pid72070
00:21:03.669 Removing: /var/run/dpdk/spdk_pid72259
00:21:03.669 Removing: /var/run/dpdk/spdk_pid72336
00:21:03.669 Removing: /var/run/dpdk/spdk_pid72429
00:21:03.669 Removing: /var/run/dpdk/spdk_pid72782
00:21:03.669 Removing: /var/run/dpdk/spdk_pid73204
00:21:03.669 Removing: /var/run/dpdk/spdk_pid73205
00:21:03.669 Removing: /var/run/dpdk/spdk_pid73206
00:21:03.669 Removing: /var/run/dpdk/spdk_pid73481
00:21:03.669 Removing: /var/run/dpdk/spdk_pid73746
00:21:03.669 Removing: /var/run/dpdk/spdk_pid74133
00:21:03.669 Removing: /var/run/dpdk/spdk_pid74140
00:21:03.669 Removing: /var/run/dpdk/spdk_pid74464
00:21:03.669 Removing: /var/run/dpdk/spdk_pid74484
00:21:03.670 Removing: /var/run/dpdk/spdk_pid74502
00:21:03.670 Removing: /var/run/dpdk/spdk_pid74534
00:21:03.670 Removing: /var/run/dpdk/spdk_pid74541
00:21:03.670 Removing: /var/run/dpdk/spdk_pid74908
00:21:03.670 Removing: /var/run/dpdk/spdk_pid74955
00:21:03.670 Removing: /var/run/dpdk/spdk_pid75281
00:21:03.670 Removing: /var/run/dpdk/spdk_pid75484
00:21:03.670 Removing: /var/run/dpdk/spdk_pid75911
00:21:03.670 Removing: /var/run/dpdk/spdk_pid76467
00:21:03.670 Removing: /var/run/dpdk/spdk_pid77279
00:21:03.670 Removing: /var/run/dpdk/spdk_pid77925
00:21:03.670 Removing: /var/run/dpdk/spdk_pid77929
00:21:03.670 Removing: /var/run/dpdk/spdk_pid79947
00:21:03.670 Removing: /var/run/dpdk/spdk_pid80003
00:21:03.670 Removing: /var/run/dpdk/spdk_pid80062
00:21:03.670 Removing: /var/run/dpdk/spdk_pid80119
00:21:03.670 Removing: /var/run/dpdk/spdk_pid80234
00:21:03.670 Removing: /var/run/dpdk/spdk_pid80289
00:21:03.670 Removing: /var/run/dpdk/spdk_pid80349
00:21:03.670 Removing: /var/run/dpdk/spdk_pid80404
00:21:03.670 Removing: /var/run/dpdk/spdk_pid80776
00:21:03.670 Removing: /var/run/dpdk/spdk_pid81981
00:21:03.670 Removing: /var/run/dpdk/spdk_pid82129
00:21:03.670 Removing: /var/run/dpdk/spdk_pid82371
00:21:03.670 Removing: /var/run/dpdk/spdk_pid82985
00:21:03.670 Removing: /var/run/dpdk/spdk_pid83139
00:21:03.670 Removing: /var/run/dpdk/spdk_pid83302
00:21:03.670 Removing: /var/run/dpdk/spdk_pid83404
00:21:03.670 Removing: /var/run/dpdk/spdk_pid83573
00:21:03.670 Removing: /var/run/dpdk/spdk_pid83686
00:21:03.670 Removing: /var/run/dpdk/spdk_pid84422
00:21:03.670 Removing: /var/run/dpdk/spdk_pid84457
00:21:03.670 Removing: /var/run/dpdk/spdk_pid84492
00:21:03.670 Removing: /var/run/dpdk/spdk_pid84770
00:21:03.670 Removing: /var/run/dpdk/spdk_pid84800
00:21:03.670 Removing: /var/run/dpdk/spdk_pid84835
00:21:03.931 Removing: /var/run/dpdk/spdk_pid85314
00:21:03.931 Removing: /var/run/dpdk/spdk_pid85331
00:21:03.931 Removing: /var/run/dpdk/spdk_pid85566
00:21:03.931 Removing: /var/run/dpdk/spdk_pid85687
00:21:03.931 Removing: /var/run/dpdk/spdk_pid85700
00:21:03.931 Clean
00:21:03.931 11:03:56 -- common/autotest_common.sh@1453 -- # return 0
00:21:03.931 11:03:56 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:21:03.931 11:03:56 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:03.931 11:03:56 -- common/autotest_common.sh@10 -- # set +x
00:21:03.931 11:03:57 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:21:03.931 11:03:57 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:03.931 11:03:57 -- common/autotest_common.sh@10 -- # set +x
00:21:03.931 11:03:57 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:03.931 11:03:57 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:21:03.931 11:03:57 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:21:03.931 11:03:57 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:21:03.931 11:03:57 -- spdk/autotest.sh@398 -- # hostname
00:21:03.931 11:03:57 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:21:04.191 geninfo: WARNING: invalid characters removed from testname!
00:21:30.756 11:04:20 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:30.756 11:04:22 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:32.144 11:04:25 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:34.055 11:04:27 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:36.662 11:04:29 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:38.571 11:04:31 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:41.107 11:04:33 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:21:41.107 11:04:33 -- spdk/autorun.sh@1 -- $ timing_finish
00:21:41.107 11:04:33 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:21:41.108 11:04:33 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:21:41.108 11:04:33 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:21:41.108 11:04:33 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:41.117 + [[ -n 5433 ]]
00:21:41.117 + sudo kill 5433
00:21:41.126 [Pipeline] }
00:21:41.133 [Pipeline] // timeout
00:21:41.138 [Pipeline] }
00:21:41.153 [Pipeline] // stage
00:21:41.159 [Pipeline] }
00:21:41.173 [Pipeline] // catchError
00:21:41.182 [Pipeline] stage
00:21:41.184 [Pipeline] { (Stop VM)
00:21:41.196 [Pipeline] sh
00:21:41.479 + vagrant halt
00:21:43.383 ==> default: Halting domain...
00:21:51.526 [Pipeline] sh
00:21:51.809 + vagrant destroy -f
00:21:54.355 ==> default: Removing domain...
00:21:54.367 [Pipeline] sh
00:21:54.650 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:21:54.659 [Pipeline] }
00:21:54.673 [Pipeline] // stage
00:21:54.678 [Pipeline] }
00:21:54.692 [Pipeline] // dir
00:21:54.696 [Pipeline] }
00:21:54.708 [Pipeline] // wrap
00:21:54.712 [Pipeline] }
00:21:54.721 [Pipeline] // catchError
00:21:54.727 [Pipeline] stage
00:21:54.729 [Pipeline] { (Epilogue)
00:21:54.738 [Pipeline] sh
00:21:55.019 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:22:00.309 [Pipeline] catchError
00:22:00.311 [Pipeline] {
00:22:00.325 [Pipeline] sh
00:22:00.719 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:22:00.719 Artifacts sizes are good
00:22:00.784 [Pipeline] }
00:22:00.798 [Pipeline] // catchError
00:22:00.809 [Pipeline] archiveArtifacts
00:22:00.816 Archiving artifacts
00:22:00.955 [Pipeline] cleanWs
00:22:00.969 [WS-CLEANUP] Deleting project workspace...
00:22:00.969 [WS-CLEANUP] Deferred wipeout is used...
00:22:00.976 [WS-CLEANUP] done
00:22:00.978 [Pipeline] }
00:22:00.994 [Pipeline] // stage
00:22:01.000 [Pipeline] }
00:22:01.015 [Pipeline] // node
00:22:01.021 [Pipeline] End of Pipeline
00:22:01.060 Finished: SUCCESS